An algorithm to find the best moving average for stock trading | by Gianluca Malato | Towards Data Science

Moving averages are one of the most used tools in stock trading. Many traders actually use only this tool in their investment toolbox. Let’s see what they are and how we can use Python to fine-tune their features.
In a time series, a moving average of period N at a certain time t is the mean value of the N values up to and including t. It is defined for each time instant except the first N ones. In this particular case, we are talking about the Simple Moving Average (SMA), because every point of the average has the same weight. There are types of moving averages that weigh each point differently, giving more weight to the most recent data: such are the Exponential Moving Average (EMA) and the Linear Weighted Moving Average (LWMA).
In trading, the number of previous observations the average is calculated from is called the period. So, an SMA with period 20 indicates a moving average of the last 20 periods.
As you can see, the SMA follows the time series and is useful for removing noise from the signal while keeping the relevant information about the trend.
Moving averages are often used in time series analysis, for example in ARIMA models, and, generally speaking, whenever we want to compare a time series value to its average value in the past.
Moving averages are often used to detect a trend.
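The SMA definition above translates directly into a rolling mean. Here is a minimal sketch using pandas; the toy price series is made up for illustration and is not data from the article:

```python
import pandas as pd

# Toy close prices, for illustration only.
close = pd.Series([10.0, 11.0, 12.0, 13.0, 14.0, 15.0])

# SMA of period 3: the mean of the last 3 values, the current one included.
# The first N-1 points are undefined (NaN), as stated in the definition above.
sma3 = close.rolling(3).mean()

print(sma3.tolist())  # [nan, nan, 11.0, 12.0, 13.0, 14.0]
```

A price sitting above this rolling-mean line is exactly the condition the article later uses as an uptrend signal.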
It’s very common to assume that if the stock price is above its moving average, it will likely continue rising in an uptrend.
The longer the period of an SMA, the longer the time horizon of the trend it spots.
As you can see, short moving averages are useful to catch short-term movements, while the 200-period SMA is able to detect a long-term trend.
Generally speaking, the most used SMA periods in trading are:
20 for swing trading
50 for medium-term trading
200 for long-term trading
It’s a general rule of thumb among traders that if a stock price is above its 200-day moving average, the trend is bullish (i.e. the price rises), so they often look for stocks whose price is above the 200-period SMA.
In order to find the best period of an SMA, we first need to know how long we are going to keep the stock in our portfolio. If we are swing traders, we may want to keep it for 5–10 business days. If we are position traders, we may raise this threshold to 40–60 days. If we are portfolio traders and use moving averages as a technical filter in our stock screening plan, we may focus on 200–300 days.
Choosing the investment period is a discretionary choice of the trader. Once we have determined it, we must try to set a suitable SMA period. We have seen 20, 50 and 200 periods, but are they always good? Not really.
Markets change a lot over time, and they often force traders to fine-tune their indicators and moving averages in order to follow volatility bursts, black swans and so on. So there is no single right choice for the moving average period, but we can build a model that self-adapts to market changes and adjusts itself in order to find the best moving average period.
The algorithm I propose here is an attempt to find the best moving average according to the investment period we choose. After we choose this period, we’ll try different moving average lengths and find the one that maximizes the expected return of our investment (i.e.
if we buy at 100 and after the chosen period the price rises to 105, we have a 5% return).
The reason for using the average return after N days as an objective function is pretty simple: we want our moving average to give the best prediction of the trend over the time we intend to keep stocks in our portfolio, so we want to maximize the average return of our investment over that time.
In practice, we’ll do the following:
Take some years of daily data of our stock (e.g. 10 years)
Split this dataset into training and test sets
Apply different moving averages on the training set and, for each one, calculate the average return value after N days when the close price is over the moving average (we don’t consider short positions for this example)
Choose the moving average length that maximizes such average return
Use this moving average to calculate the average return on the test set
Verify that the average return on the test set is statistically similar to the average return achieved on the training set
The last point is the most important one, because it performs the cross-validation that helps us avoid overfitting after the optimization phase. If this check is satisfied, we can use the moving average length we found.
For this example, we’ll use different stocks and investment lengths. The statistical significance of the difference between the mean values will be assessed using Welch’s test.
First of all, we must install the yfinance library.
It’s very useful for downloading stock data.
!pip install yfinance
Then we can import some useful packages:
import yfinance
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
Let’s assume we want to keep the SPY ETF on the S&P 500 index for 2 days and that we want to analyze 10 years of data.
n_forward = 2
name = 'SPY'
start_date = "2010-01-01"
end_date = "2020-06-15"
Now we can download our data and calculate the return after 2 days.
ticker = yfinance.Ticker(name)
data = ticker.history(interval="1d", start=start_date, end=end_date)
data['Forward Close'] = data['Close'].shift(-n_forward)
data['Forward Return'] = (data['Forward Close'] - data['Close']) / data['Close']
Now we can perform the optimization to search for the best moving average. We’ll run a for loop over moving average lengths from 20 to 500 periods. For each length, we split our dataset into training and test sets, then look only at those days when the close price is above the SMA and calculate the forward return.
Finally, we’ll calculate the average forward return in the training and test sets, comparing them using Welch’s test.
result = []
train_size = 0.6
for sma_length in range(20, 500):
    data['SMA'] = data['Close'].rolling(sma_length).mean()
    data['input'] = [int(x) for x in data['Close'] > data['SMA']]
    df = data.dropna()
    training = df.head(int(train_size * df.shape[0]))
    test = df.tail(int((1 - train_size) * df.shape[0]))
    tr_returns = training[training['input'] == 1]['Forward Return']
    test_returns = test[test['input'] == 1]['Forward Return']
    mean_forward_return_training = tr_returns.mean()
    mean_forward_return_test = test_returns.mean()
    pvalue = ttest_ind(tr_returns, test_returns, equal_var=False)[1]
    result.append({
        'sma_length': sma_length,
        'training_forward_return': mean_forward_return_training,
        'test_forward_return': mean_forward_return_test,
        'p-value': pvalue
    })
We’ll sort all the results by the average training forward return in order to get the optimal moving average.
result.sort(key=lambda x: -x['training_forward_return'])
The first item, which has the best score, is:
As you can see, the p-value is higher than 5%, so we can assume that the average return in the test set is comparable with the average return in the training set, so we haven’t suffered overfitting.
Let’s see the price chart according to the best moving average we’ve found (which is the 479-period moving average).
It’s clear that the price is very often above the SMA.
Now, let’s see what happens if we set n_forward = 40 (that is, we keep our position open for 40 days).
The best moving average produces these results:
As you can see, the p-value is lower than 5%, so we can assume that the training phase has introduced some kind of overfitting, so we can’t use this SMA in the real world.
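This acceptance check can also be automated. The sketch below is my own addition, not code from the article: it assumes the result list of dictionaries built in the loop above and a conventional 5% significance threshold, discarding every SMA length whose train/test returns differ significantly before picking the best one:

```python
# Pick the best SMA length among those that pass the Welch's-test check.
# Assumes each entry has the keys used in the optimization loop above.
P_THRESHOLD = 0.05  # conventional 5% significance level (an assumption)

def best_valid_sma(result, p_threshold=P_THRESHOLD):
    valid = [r for r in result if r['p-value'] > p_threshold]
    if not valid:
        return None  # every candidate looks overfitted: stay out of this market
    return max(valid, key=lambda r: r['training_forward_return'])

# Tiny made-up example of the selection logic:
demo = [
    {'sma_length': 20, 'training_forward_return': 0.004, 'p-value': 0.01},
    {'sma_length': 479, 'training_forward_return': 0.003, 'p-value': 0.40},
]
print(best_valid_sma(demo)['sma_length'])  # → 479
```

Returning None when no candidate passes is a deliberate design choice here: it mirrors the article's advice that an SMA failing the test should not be used in the real world.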
Another reason could be that volatility has changed too much and the market needs to stabilize before making us invest in it.\nFinally, let’s see what happens with a Gold-based ETF (ticker: GLD) with 40-days investment.\np-value is quite high, so there’s no overfitting.\nThe best moving average period is 136, as we can see in the chart below.\nIn this article, we’ve seen a simple algorithm to find the best Simple Moving Average for stock and ETF trading. It can be easily applied every trading day in order to find, day by day, the best moving average. In this way, a trader can easily adapt to market changes and to volatility fluctuations.\nAll the calculations shown in this article can be found on GitHub here: https://github.com/gianlucamalato/machinelearning/blob/master/Find_the_best_moving_average.ipynb.\nNote from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details."},"parsed":{"kind":"list like","value":[{"code":null,"e":386,"s":172,"text":"Moving averages are one of the most used tools in stock trading. Many traders actually use only this tool in their investment toolbox. Let’s see what they are and how we can use Python to fine-tune their features."},{"code":null,"e":928,"s":386,"text":"In a time series, a moving average of period N at a certain time t, is the mean value of the N values before t (included). It’s defined for each time instant excluding the first N ones. In this particular case, we are talking about the Simple Moving Average (SMA) because every point of the average has the same weight. There are types of moving averages that weigh every point in a different way, giving more weight to the most recent data. 
It’s the case of the Exponential Moving Average (EMA) or the Linear Weighted Moving Average (LWMA)."},{"code":null,"e":1114,"s":928,"text":"In trading, the number of previous time series observations the average is calculated from is called period. So, an SMA with period 20 indicates a moving average of the last 20 periods."},{"code":null,"e":1257,"s":1114,"text":"As you can see, SMA follows the time series and it’s useful to remove noise from the signal, keeping the relevant information about the trend."},{"code":null,"e":1444,"s":1257,"text":"Moving averages are often used in time series analysis, for example in ARIMA models and, generally speaking, when we want to compare a time series value to the average value in the past."},{"code":null,"e":1620,"s":1444,"text":"Moving averages are often used to detect a trend. It’s very common to assume that if the stock price is above its moving average, it will likely continue rising in an uptrend."},{"code":null,"e":1704,"s":1620,"text":"The longer the period of an SMA, the longer the time horizon of the trend it spots."},{"code":null,"e":1846,"s":1704,"text":"As you can see, short moving averages are useful to catch short-term movements, while the 200-period SMA is able to detect a long-term trend."},{"code":null,"e":1908,"s":1846,"text":"Generally speaking, the most used SMA periods in trading are:"},{"code":null,"e":1929,"s":1908,"text":"20 for swing trading"},{"code":null,"e":1956,"s":1929,"text":"50 for medium-term trading"},{"code":null,"e":1982,"s":1956,"text":"200 for long-term trading"},{"code":null,"e":2209,"s":1982,"text":"It’s a general rule of thumb among traders that if a stock price is above its 200-days moving average, the trend is bullish (i.e. the price rises). So they are often looking for stocks whose price is above the 200-periods SMA."},{"code":null,"e":2622,"s":2209,"text":"In order to find the best period of an SMA, we first need to know how long we are going to keep the stock in our portfolio. 
If we are swing traders, we may want to keep it for 5–10 business days. If we are position traders, maybe we must raise this threshold to 40–60 days. If we are portfolio traders and use moving averages as a technical filter in our stock screening plan, maybe we can focus on 200–300 days."},{"code":null,"e":2844,"s":2622,"text":"Choosing the investment period is a discretionary choice of the trader. Once we have determined it, we must try to set a suitable SMA period. We have seen 20, 50 and 200 periods, but are they always good? Well not really."},{"code":null,"e":3211,"s":2844,"text":"Markets change a lot during the time and they often make traders fine-tune their indicators and moving averages in order to follow volatility burst, black swans and so on. So there isn’t the right choice for the moving average period, but we can build a model that self-adapts to market changes and auto-adjust itself in order to find the best moving average period."},{"code":null,"e":3571,"s":3211,"text":"The algorithm I propose here is an attempt to find the best moving average according to the investment period we choose. After we choose this period, we’ll try different moving averages length and find the one that maximizes the expected return of our investment (i.e. if we buy at 100 and after the chosen period the price rises to 105, we have a 5% return)."},{"code":null,"e":3876,"s":3571,"text":"The reason of using the average return after N days as an objective function is pretty simple: we want our moving average to give us the best prediction of the trend according to the time we want to keep stocks in our portfolio, so we want to maximize the average return of our investment in such a time."},{"code":null,"e":3913,"s":3876,"text":"In practice, we’ll do the following:"},{"code":null,"e":3972,"s":3913,"text":"Take some years of daily data of our stock (e.g. 
10 years)"},{"code":null,"e":4019,"s":3972,"text":"Split this dataset into training and test sets"},{"code":null,"e":4239,"s":4019,"text":"Apply different moving averages on the training set and, for each one, calculate the average return value after N days when the close price is over the moving average (we don’t consider short positions for this example)"},{"code":null,"e":4307,"s":4239,"text":"Choose the moving average length that maximizes such average return"},{"code":null,"e":4379,"s":4307,"text":"Use this moving average to calculate the average return on the test set"},{"code":null,"e":4502,"s":4379,"text":"Verify that the average return on the test set is statistically similar to the average return achieved on the training set"},{"code":null,"e":4717,"s":4502,"text":"The last point is the most important one because it performs cross-validation that helps us avoid overfitting after the optimization phase. If this check is satisfied, we can use the moving average length we found."},{"code":null,"e":4866,"s":4717,"text":"For this example, we’ll use different stocks and investment length. The statistical significance of the mean values will be done using Welch’s test."},{"code":null,"e":4959,"s":4866,"text":"First of all, we must install yfinance library. 
It’s very useful for downloading stock data."},{"code":null,"e":4981,"s":4959,"text":"!pip install yfinance"},{"code":null,"e":5022,"s":4981,"text":"Then we can import some useful packages:"},{"code":null,"e":5139,"s":5022,"text":"import yfinanceimport pandas as pdimport numpy as npimport matplotlib.pyplot as pltfrom scipy.stats import ttest_ind"},{"code":null,"e":5254,"s":5139,"text":"Let’s assume we want to keep the SPY ETF on S&P 500 index for 2 days and that we want to analyze 10 years of data."},{"code":null,"e":5328,"s":5254,"text":"n_forward = 2name = 'SPY'start_date = \"2010-01-01\"end_date = \"2020-06-15\""},{"code":null,"e":5396,"s":5328,"text":"Now we can download our data and calculate the return after 2 days."},{"code":null,"e":5626,"s":5396,"text":"ticker = yfinance.Ticker(name)data = ticker.history(interval=\"1d\",start=start_date,end=end_date)data['Forward Close'] = data['Close'].shift(-n_forward)data['Forward Return'] = (data['Forward Close'] - data['Close'])/data['Close']"},{"code":null,"e":6081,"s":5626,"text":"Now we can perform the optimization for searching the best moving average. We’ll do a for loop that spans among 20-period moving average and 500-period moving average. For each period we split our dataset in training and test sets, then we’ll look only ad those days when the close price is above the SMA and calculate the forward return. 
Finally, we’ll calculate the average forward return in training and test sets, comparing them using a Welch’s test."},{"code":null,"e":6866,"s":6081,"text":"result = []train_size = 0.6for sma_length in range(20,500): data['SMA'] = data['Close'].rolling(sma_length).mean() data['input'] = [int(x) for x in data['Close'] > data['SMA']] df = data.dropna() training = df.head(int(train_size * df.shape[0])) test = df.tail(int((1 - train_size) * df.shape[0])) tr_returns = training[training['input'] == 1]['Forward Return'] test_returns = test[test['input'] == 1]['Forward Return'] mean_forward_return_training = tr_returns.mean() mean_forward_return_test = test_returns.mean() pvalue = ttest_ind(tr_returns,test_returns,equal_var=False)[1] result.append({ 'sma_length':sma_length, 'training_forward_return': mean_forward_return_training, 'test_forward_return': mean_forward_return_test, 'p-value':pvalue })"},{"code":null,"e":6972,"s":6866,"text":"We’ll sort all the results by training average future returns in order to get the optimal moving average."},{"code":null,"e":7032,"s":6972,"text":"result.sort(key = lambda x : -x['training_forward_return'])"},{"code":null,"e":7078,"s":7032,"text":"The first item, which has the best score, is:"},{"code":null,"e":7277,"s":7078,"text":"As you can see, the p-value is higher than 5%, so we can assume that the average return in the test set is comparable with the average return in the training set, so we haven’t suffered overfitting."},{"code":null,"e":7394,"s":7277,"text":"Let’s see the price chart according to the best moving average we’ve found (which is the 479-period moving average)."},{"code":null,"e":7449,"s":7394,"text":"It’s clear that the price is very often above the SMA."},{"code":null,"e":7554,"s":7449,"text":"Now, let’s see what happens if we set n_forward = 40 (that is, we keep our position opened for 40 days)."},{"code":null,"e":7602,"s":7554,"text":"The best moving average produces these 
results:"},{"code":null,"e":7900,"s":7602,"text":"As you can see, the p-value is lower than 5%, so we can assume that the training phase has introduced some kind of overfitting, so we can’t use this SMA in the real world. Another reason could be that volatility has changed too much and the market needs to stabilize before making us invest in it."},{"code":null,"e":7993,"s":7900,"text":"Finally, let’s see what happens with a Gold-based ETF (ticker: GLD) with 40-days investment."},{"code":null,"e":8043,"s":7993,"text":"p-value is quite high, so there’s no overfitting."},{"code":null,"e":8116,"s":8043,"text":"The best moving average period is 136, as we can see in the chart below."},{"code":null,"e":8416,"s":8116,"text":"In this article, we’ve seen a simple algorithm to find the best Simple Moving Average for stock and ETF trading. It can be easily applied every trading day in order to find, day by day, the best moving average. In this way, a trader can easily adapt to market changes and to volatility fluctuations."},{"code":null,"e":8586,"s":8416,"text":"All the calculations shown in this article can be found on GitHub here: https://github.com/gianlucamalato/machinelearning/blob/master/Find_the_best_moving_average.ipynb."}],"string":"[\n {\n \"code\": null,\n \"e\": 386,\n \"s\": 172,\n \"text\": \"Moving averages are one of the most used tools in stock trading. Many traders actually use only this tool in their investment toolbox. Let’s see what they are and how we can use Python to fine-tune their features.\"\n },\n {\n \"code\": null,\n \"e\": 928,\n \"s\": 386,\n \"text\": \"In a time series, a moving average of period N at a certain time t, is the mean value of the N values before t (included). It’s defined for each time instant excluding the first N ones. In this particular case, we are talking about the Simple Moving Average (SMA) because every point of the average has the same weight. 
There are types of moving averages that weigh every point in a different way, giving more weight to the most recent data. It’s the case of the Exponential Moving Average (EMA) or the Linear Weighted Moving Average (LWMA).\"\n },\n {\n \"code\": null,\n \"e\": 1114,\n \"s\": 928,\n \"text\": \"In trading, the number of previous time series observations the average is calculated from is called period. So, an SMA with period 20 indicates a moving average of the last 20 periods.\"\n },\n {\n \"code\": null,\n \"e\": 1257,\n \"s\": 1114,\n \"text\": \"As you can see, SMA follows the time series and it’s useful to remove noise from the signal, keeping the relevant information about the trend.\"\n },\n {\n \"code\": null,\n \"e\": 1444,\n \"s\": 1257,\n \"text\": \"Moving averages are often used in time series analysis, for example in ARIMA models and, generally speaking, when we want to compare a time series value to the average value in the past.\"\n },\n {\n \"code\": null,\n \"e\": 1620,\n \"s\": 1444,\n \"text\": \"Moving averages are often used to detect a trend. 
It’s very common to assume that if the stock price is above its moving average, it will likely continue rising in an uptrend.\"\n },\n {\n \"code\": null,\n \"e\": 1704,\n \"s\": 1620,\n \"text\": \"The longer the period of an SMA, the longer the time horizon of the trend it spots.\"\n },\n {\n \"code\": null,\n \"e\": 1846,\n \"s\": 1704,\n \"text\": \"As you can see, short moving averages are useful to catch short-term movements, while the 200-period SMA is able to detect a long-term trend.\"\n },\n {\n \"code\": null,\n \"e\": 1908,\n \"s\": 1846,\n \"text\": \"Generally speaking, the most used SMA periods in trading are:\"\n },\n {\n \"code\": null,\n \"e\": 1929,\n \"s\": 1908,\n \"text\": \"20 for swing trading\"\n },\n {\n \"code\": null,\n \"e\": 1956,\n \"s\": 1929,\n \"text\": \"50 for medium-term trading\"\n },\n {\n \"code\": null,\n \"e\": 1982,\n \"s\": 1956,\n \"text\": \"200 for long-term trading\"\n },\n {\n \"code\": null,\n \"e\": 2209,\n \"s\": 1982,\n \"text\": \"It’s a general rule of thumb among traders that if a stock price is above its 200-days moving average, the trend is bullish (i.e. the price rises). So they are often looking for stocks whose price is above the 200-periods SMA.\"\n },\n {\n \"code\": null,\n \"e\": 2622,\n \"s\": 2209,\n \"text\": \"In order to find the best period of an SMA, we first need to know how long we are going to keep the stock in our portfolio. If we are swing traders, we may want to keep it for 5–10 business days. If we are position traders, maybe we must raise this threshold to 40–60 days. If we are portfolio traders and use moving averages as a technical filter in our stock screening plan, maybe we can focus on 200–300 days.\"\n },\n {\n \"code\": null,\n \"e\": 2844,\n \"s\": 2622,\n \"text\": \"Choosing the investment period is a discretionary choice of the trader. Once we have determined it, we must try to set a suitable SMA period. We have seen 20, 50 and 200 periods, but are they always good? 
Well, not really.\"\n },\n {\n \"code\": null,\n \"e\": 3211,\n \"s\": 2844,\n \"text\": \"Markets change a lot over time, and traders often have to fine-tune their indicators and moving averages in order to follow volatility bursts, black swans and so on. So there is no single right choice for the moving average period, but we can build a model that adapts to market changes and adjusts itself in order to find the best moving average period.\"\n },\n {\n \"code\": null,\n \"e\": 3571,\n \"s\": 3211,\n \"text\": \"The algorithm I propose here is an attempt to find the best moving average according to the investment period we choose. After we choose this period, we’ll try different moving average lengths and find the one that maximizes the expected return of our investment (i.e. if we buy at 100 and after the chosen period the price rises to 105, we have a 5% return).\"\n },\n {\n \"code\": null,\n \"e\": 3876,\n \"s\": 3571,\n \"text\": \"The reason for using the average return after N days as an objective function is pretty simple: we want our moving average to give us the best prediction of the trend according to the time we want to keep stocks in our portfolio, so we want to maximize the average return of our investment over such a time.\"\n },\n {\n \"code\": null,\n \"e\": 3913,\n \"s\": 3876,\n \"text\": \"In practice, we’ll do the following:\"\n },\n {\n \"code\": null,\n \"e\": 3972,\n \"s\": 3913,\n \"text\": \"Take some years of daily data of our stock (e.g. 
10 years)\"\n },\n {\n \"code\": null,\n \"e\": 4019,\n \"s\": 3972,\n \"text\": \"Split this dataset into training and test sets\"\n },\n {\n \"code\": null,\n \"e\": 4239,\n \"s\": 4019,\n \"text\": \"Apply different moving averages on the training set and, for each one, calculate the average return value after N days when the close price is over the moving average (we don’t consider short positions for this example)\"\n },\n {\n \"code\": null,\n \"e\": 4307,\n \"s\": 4239,\n \"text\": \"Choose the moving average length that maximizes this average return\"\n },\n {\n \"code\": null,\n \"e\": 4379,\n \"s\": 4307,\n \"text\": \"Use this moving average to calculate the average return on the test set\"\n },\n {\n \"code\": null,\n \"e\": 4502,\n \"s\": 4379,\n \"text\": \"Verify that the average return on the test set is statistically similar to the average return achieved on the training set\"\n },\n {\n \"code\": null,\n \"e\": 4717,\n \"s\": 4502,\n \"text\": \"The last point is the most important one because it performs cross-validation that helps us avoid overfitting after the optimization phase. If this check is satisfied, we can use the moving average length we found.\"\n },\n {\n \"code\": null,\n \"e\": 4866,\n \"s\": 4717,\n \"text\": \"For this example, we’ll use different stocks and investment lengths. The statistical significance of the mean values will be assessed using Welch’s test.\"\n },\n {\n \"code\": null,\n \"e\": 4959,\n \"s\": 4866,\n \"text\": \"First of all, we must install the yfinance library. 
It’s very useful for downloading stock data.\"\n },\n {\n \"code\": null,\n \"e\": 4981,\n \"s\": 4959,\n \"text\": \"!pip install yfinance\"\n },\n {\n \"code\": null,\n \"e\": 5022,\n \"s\": 4981,\n \"text\": \"Then we can import some useful packages:\"\n },\n {\n \"code\": null,\n \"e\": 5139,\n \"s\": 5022,\n \"text\": \"import yfinance\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.stats import ttest_ind\"\n },\n {\n \"code\": null,\n \"e\": 5254,\n \"s\": 5139,\n \"text\": \"Let’s assume we want to keep the SPY ETF on the S&P 500 index for 2 days and that we want to analyze 10 years of data.\"\n },\n {\n \"code\": null,\n \"e\": 5328,\n \"s\": 5254,\n \"text\": \"n_forward = 2\nname = 'SPY'\nstart_date = \\\"2010-01-01\\\"\nend_date = \\\"2020-06-15\\\"\"\n },\n {\n \"code\": null,\n \"e\": 5396,\n \"s\": 5328,\n \"text\": \"Now we can download our data and calculate the return after 2 days.\"\n },\n {\n \"code\": null,\n \"e\": 5626,\n \"s\": 5396,\n \"text\": \"ticker = yfinance.Ticker(name)\ndata = ticker.history(interval=\\\"1d\\\", start=start_date, end=end_date)\ndata['Forward Close'] = data['Close'].shift(-n_forward)\ndata['Forward Return'] = (data['Forward Close'] - data['Close'])/data['Close']\"\n },\n {\n \"code\": null,\n \"e\": 6081,\n \"s\": 5626,\n \"text\": \"Now we can perform the optimization to search for the best moving average. We’ll run a for loop over SMA lengths from 20 to 500 periods. For each length we split our dataset into training and test sets, then we’ll look only at those days when the close price is above the SMA and calculate the forward return. 
Finally, we’ll calculate the average forward return in the training and test sets, comparing them using Welch’s test.\"\n },\n {\n \"code\": null,\n \"e\": 6866,\n \"s\": 6081,\n \"text\": \"result = []\ntrain_size = 0.6\nfor sma_length in range(20, 500):\n    data['SMA'] = data['Close'].rolling(sma_length).mean()\n    data['input'] = [int(x) for x in data['Close'] > data['SMA']]\n    df = data.dropna()\n    training = df.head(int(train_size * df.shape[0]))\n    test = df.tail(int((1 - train_size) * df.shape[0]))\n    tr_returns = training[training['input'] == 1]['Forward Return']\n    test_returns = test[test['input'] == 1]['Forward Return']\n    mean_forward_return_training = tr_returns.mean()\n    mean_forward_return_test = test_returns.mean()\n    pvalue = ttest_ind(tr_returns, test_returns, equal_var=False)[1]\n    result.append({\n        'sma_length': sma_length,\n        'training_forward_return': mean_forward_return_training,\n        'test_forward_return': mean_forward_return_test,\n        'p-value': pvalue\n    })\"\n },\n {\n \"code\": null,\n \"e\": 6972,\n \"s\": 6866,\n \"text\": \"We’ll sort all the results by the average forward return on the training set in order to get the optimal moving average.\"\n },\n {\n \"code\": null,\n \"e\": 7032,\n \"s\": 6972,\n \"text\": \"result.sort(key = lambda x : -x['training_forward_return'])\"\n },\n {\n \"code\": null,\n \"e\": 7078,\n \"s\": 7032,\n \"text\": \"The first item, which has the best score, is:\"\n },\n {\n \"code\": null,\n \"e\": 7277,\n \"s\": 7078,\n \"text\": \"As you can see, the p-value is higher than 5%, so we can assume that the average return in the test set is comparable with the average return in the training set, so we haven’t suffered from overfitting.\"\n },\n {\n \"code\": null,\n \"e\": 7394,\n \"s\": 7277,\n \"text\": \"Let’s see the price chart according to the best moving average we’ve found (which is the 479-period moving average).\"\n },\n {\n \"code\": null,\n \"e\": 7449,\n \"s\": 7394,\n \"text\": \"It’s clear that the price is very often above the SMA.\"\n },\n {\n \"code\": null,\n \"e\": 
7554,\n \"s\": 7449,\n \"text\": \"Now, let’s see what happens if we set n_forward = 40 (that is, we keep our position open for 40 days).\"\n },\n {\n \"code\": null,\n \"e\": 7602,\n \"s\": 7554,\n \"text\": \"The best moving average produces these results:\"\n },\n {\n \"code\": null,\n \"e\": 7900,\n \"s\": 7602,\n \"text\": \"As you can see, the p-value is lower than 5%, so we can assume that the training phase has introduced some kind of overfitting, so we can’t use this SMA in the real world. Another reason could be that volatility has changed too much and the market needs to stabilize before we invest in it.\"\n },\n {\n \"code\": null,\n \"e\": 7993,\n \"s\": 7900,\n \"text\": \"Finally, let’s see what happens with a gold-based ETF (ticker: GLD) with a 40-day investment.\"\n },\n {\n \"code\": null,\n \"e\": 8043,\n \"s\": 7993,\n \"text\": \"The p-value is quite high, so there’s no overfitting.\"\n },\n {\n \"code\": null,\n \"e\": 8116,\n \"s\": 8043,\n \"text\": \"The best moving average period is 136, as we can see in the chart below.\"\n },\n {\n \"code\": null,\n \"e\": 8416,\n \"s\": 8116,\n \"text\": \"In this article, we’ve seen a simple algorithm to find the best Simple Moving Average for stock and ETF trading. It can easily be applied every trading day in order to find, day by day, the best moving average. In this way, a trader can easily adapt to market changes and to volatility fluctuations.\"\n },\n {\n \"code\": null,\n \"e\": 8586,\n \"s\": 8416,\n \"text\": \"All the calculations shown in this article can be found on GitHub here: https://github.com/gianlucamalato/machinelearning/blob/master/Find_the_best_moving_average.ipynb.\"\n }\n]"}}},{"rowIdx":517,"cells":{"title":{"kind":"string","value":"du Command in LINUX - GeeksforGeeks"},"text":{"kind":"string","value":"15 May, 2019\nWhile working on LINUX, there might come a situation when you want to transfer a set of files or the entire directory. 
In such a case, you might want to know the disk space consumed by that particular directory or set of files. As you are dealing with LINUX, there is a command-line utility for this as well: the du command, which estimates and displays the disk space used by files.
So, in simple words, the du command-line utility helps you find out the disk usage of a set of files or a directory.
Here’s the syntax of the du command:
//syntax of du command

du [OPTION]... [FILE]...
 or
du [OPTION]... --files0-from=F

where OPTION refers to the options compatible with the du command and FILE refers to the file whose disk usage you want to know.
Using du command
Suppose there are two files, say kt.txt and pt.txt, and you want to know the disk usage of these files. Then you can simply use the du command by specifying the file names along with it as:
//using du command

$du kt.txt pt.txt
8 kt.txt
4 pt.txt

/* the first column 
displayed the file's
disk usage */

So, as shown above, du displayed the disk space used by the corresponding files.
Now, the displayed values are in units of the first available SIZE from --block-size and the DU_BLOCK_SIZE, BLOCK_SIZE and BLOCKSIZE environment variables; otherwise, the unit defaults to 1024 bytes (or 512 if POSIXLY_CORRECT is set).
Don’t get puzzled by the above paragraph. 
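The SIZE lookup described above can be seen directly. A minimal sketch follows; the temporary file path and its size are illustrative, not from the article:

```shell
# Create a throwaway file to measure (path and size are arbitrary).
mkdir -p /tmp/du_demo
head -c 300000 /dev/zero > /tmp/du_demo/big.bin

du /tmp/du_demo/big.bin                    # default: 1024-byte units
du -B 512 /tmp/du_demo/big.bin             # --block-size via -B: 512-byte units
du --block-size=1M /tmp/du_demo/big.bin    # sizes rounded up to whole megabytes
DU_BLOCK_SIZE=512 du /tmp/du_demo/big.bin  # same unit, taken from the environment
```

The same number of bytes is simply reported in different units; the environment variables only matter when no block-size option is given on the command line.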
We can simply use the -h option to force du to produce the output in a human-readable format.
Options for du command
-a, --all option : This option produces counts as output for all files, not just for directories.
--apparent-size option : This prints the apparent sizes of the files rather than their disk usage; the disk usage can be larger due to holes in (sparse) files, internal fragmentation and indirect blocks, while the apparent size is usually smaller.
-c, --total option : This displays a grand total.
-B, --block-size=SIZE option : This option causes sizes to be scaled by SIZE; for example, -BM prints sizes in megabytes.
-b, --bytes option : This option is equivalent to --apparent-size --block-size=1.
-D, --dereference-args option : This option dereferences only the symbolic links listed on the command line.
-H option : This option is equivalent to the -D option above.
--files0-from=F option : This summarizes the disk usage of the NUL-terminated file names specified in the file F; if F is “-”, names are read from the standard input.
-h, --human-readable option : This prints sizes in human-readable format, i.e. rounding values and using abbreviations like 1K; it is the option most often used with du.
--si option : This is much like the -h option but uses powers of 1000 instead of 1024.
-k option : It’s equivalent to --block-size=1K.
-l, --count-links option : This counts sizes many times if files are hard-linked.
-m option : This is equivalent to --block-size=1M.
-L, --dereference option : This option dereferences all symbolic links.
-P, --no-dereference option : This option tells du not to follow any symbolic links, which is the default setting.
-0, --null option : This ends each output line with a NUL (0) byte rather than a newline.
-S, --separate-dirs option : This causes the output not to include the size of subdirectories.
-s, --summarize option : This option will display a total only for each 
argument.
-x, --one-file-system option : This will cause du to skip directories on different file systems.
-X, --exclude-from=FILE option : Exclude files that match any pattern given in FILE.
--exclude=PATTERN option : It will exclude files that match PATTERN.
-d, --max-depth=N option : Print the total for a directory (or file, with --all) only if it is N or fewer levels below the command line argument; --max-depth=0 is the same as --summarize.
--time option : This will show the time of the last modification of any file in the directory, or any of its subdirectories.
--time=WORD option : This shows time as WORD instead of modification time: atime, access, use, ctime or status.
--time-style=STYLE option : This shows time using STYLE: full-iso, long-iso, iso, or +FORMAT (FORMAT is interpreted like the format of date).
--help option : This will display a help message and exit.
--version option : This will display version info and exit.
Examples of using du command
1. Using -h option : As mentioned above, the -h option is used to produce the output in human-readable format.
//using -h with du

$du -h kt.txt pt.txt
8.0K kt.txt
4.0K pt.txt

/*now the output
is in human readable
format i.e in
Kilobytes */

2. Using du to show disk usage of a directory : Now, if you pass a directory name, say kartik, as an argument to du, it will show the disk usage info of the input directory kartik and its sub-directories (if any).
/*using du to display disk usage 
of a directory and its
sub-directories */

$du kartik
4 kartik/thakral
24 kartik


Above, the disk usage info of the directory kartik and its sub-directory thakral is displayed.
3. Using -a option : Now, as seen above, only the disk usage info of the directory kartik and its sub-directory thakral is displayed, but what if you also want to know the disk usage info of all the files present under the directory kartik? 
For this, use the -a option.
//using -a with du

$du -a kartik
8 kartik/kt.txt
4 kartik/pt.txt
4 kartik/pranjal.png
4 kartik/thakral.png
4 kartik/thakral
24 kartik

/*so with -a option used
all the files (under directory
kartik) disk usage info is
displayed along with the 
thakral sub-directory */

4. Using -c option : This option displays the grand total as shown.
//using -c with du

$du -c -h kt.txt pt.txt
8.0K kt.txt
4.0K pt.txt
12.0K total

/* at the end
total is displayed 
for the disk usage */

5. Using --time option : This option is used to display the last modification time in the output of du.
//using --time with du

$du --time kt.txt
4 2017-11-18 16:00 kt.txt

/*so the last
modification date and
time gets displayed
when --time 
option is used */

6. Using --exclude=PATTERN option : In one of the examples above, the disk usage info of all the files under the directory kartik was displayed. Now, suppose you want the info of the .txt files only and not of the .png files; in that case, to exclude the .png pattern, you can use this option.
//using --exclude=PATTERN with du

$du --exclude=*.png -a kartik
8 kartik/kt.txt
4 kartik/pt.txt
4 kartik/thakral
24 kartik

/*so, in this case
.png files info are
excluded from the output */

7. 
Using --max-depth=N option : Now, this option allows you to limit the output of du to a particular depth of a directory. Suppose you have a directory named FRIENDS with sub-directories FRIENDS/college and FRIENDS/school, and under the sub-directory college you have another sub-directory, FRIENDS/college/farewell. You can then use the --max-depth=N option as:
//using --max-depth=N with du

$du --max-depth=0 FRIENDS
24 FRIENDS


/* in this case you 
restricted du output
only to top-level
directory */

Now, for the sub-directories college and school you can use:
$du --max-depth=1 FRIENDS
16 FRIENDS/college
8 FRIENDS/school
24 FRIENDS


Now, for FRIENDS/college/farewell you can use --max-depth=2 as:
$du --max-depth=2 FRIENDS
4 FRIENDS/college/farewell
16 FRIENDS/college
8 FRIENDS/school
24 FRIENDS

/*so this is how N
in --max-depth=N 
is used for levels */

8. Using --files0-from=F option : As mentioned above, this is used to summarize the disk usage of the NUL-terminated file names specified in the file F; if F is “-”, names are read from the standard input. Let’s use this option for taking input from STDIN as:
//using --files0-from=F with du

$pwd
/home/kartik

$ls
kt.txt pt.txt thakral

/*now use this option for 
taking input from
STDIN */

$du --files0-from=-
kt.txt8 kt.txt
pt.txt4 pt.txt

/* in this case after 
giving kt.txt as input
from STDIN you need to
press Ctrl+D twice, then the
output is shown; the same goes for
pt.txt or any other file name
given from STDIN */


Applications of du command
It can be used to find out the disk space occupied by a particular directory, for example before transferring files from one computer to another.
The du command can be linked with pipes to filters. A filter is usually a specialized program that transforms the data in a meaningful way.
There also exist some other ways, like the df command, to find the disk usage, but they all lack du's ability to show 
the disk usage of individual directories and files.\nIt can also be used to find out quickly the number of sub-directories present in a directory.\nExample of using du with filters\nLet’s take a simple example of using du with sort command so that the output produced by du will be sorted in the increasing order of size of files.\n\n$du -a kartik\n8 kartik/kt.txt\n4 kartik/pt.txt\n4 kartik/pranjal.png\n4 kartik/thakral.png\n4 kartik/thakral\n24 kartik\n\n/*now using du to produce\nsorted output */\n\n$du -a kartik | sort -n\n4 kartik/pt.txt\n4 kartik/pranjal.png\n4 kartik/thakral.png\n4 kartik/thakral\n8 kartik/kt.txt\n24 kartik\n\n/* now the output displayed\nis sorted according to the size */\n\nThe sort command along with -n option used causes to list the output in numeric order with the file with the smallest size appearing first.In this way du can be used to arrange the output according to the size.\nThat’s all about du command.\nlinux-command\nLinux-file-commands\nLinux-Unix\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nComments\nOld Comments\nTCP Server-Client implementation in C\nZIP command in Linux with examples\nConditional Statements | Shell Script\ntar command in Linux with examples\nUDP Server-Client implementation in C\ncurl command in Linux with Examples\nCat command in Linux with examples\necho command in Linux with Examples\nMutex lock for Linux Thread Synchronization\nThread functions in C/C++"},"parsed":{"kind":"list like","value":[{"code":null,"e":23950,"s":23922,"text":"\n15 May, 2019"},{"code":null,"e":24335,"s":23950,"text":"While working on LINUX, there might come a situation when you want to transfer a set of files or the entire directory. In such a case, you might wanna know the disk space consumed by that particular directory or set of files. 
As you are dealing with LINUX, there exists a command line utility for this also which is du command that estimates and displays the disk space used by files."},{"code":null,"e":24448,"s":24335,"text":"So, in simple words du command-line utility helps you to find out the disk usage of set of files or a directory."},{"code":null,"e":24482,"s":24448,"text":"Here’s the syntax of du command :"},{"code":null,"e":24573,"s":24482,"text":"//syntax of du command\n\ndu [OPTION]... [FILE]...\n or\ndu [OPTION]... --files0-from=F\n"},{"code":null,"e":24716,"s":24573,"text":"where OPTION refers to the options compatible with du command and FILE refers to the filename of which you wanna know the disk space occupied."},{"code":null,"e":24733,"s":24716,"text":"Using du command"},{"code":null,"e":24917,"s":24733,"text":"Suppose there are two files say kt.txt and pt.txt and you want to know the disk usage of these files, then you can simply use du command by specifying the file names along with it as:"},{"code":null,"e":25043,"s":24917,"text":"//using du command\n\n$du kt.txt pt.txt\n8 kt.txt\n4 pt.txt\n\n/* the first column \ndisplayed the file's\ndisk usage */\n"},{"code":null,"e":25123,"s":25043,"text":"So, as shown above du displayed the disk space used by the corresponding files."},{"code":null,"e":25392,"s":25123,"text":"Now, the displayed values are actually in the units of the first available SIZE from – -block-size, and the DU_BLOCK_SIZE, BLOCK_SIZE and BLOCKSIZE environment variables and if not in this format then units are default to 1024 bytes (or 512 if POSIXLY_CORRECT is set)."},{"code":null,"e":25528,"s":25392,"text":"Don’t get puzzled from the above paragraph. 
We can simply use -h option to force du to produce the output in the human readable format."},{"code":null,"e":25551,"s":25528,"text":"Options for du command"},{"code":null,"e":25650,"s":25551,"text":"-a, – -all option : This option produces counts as output for all files, not for just directories."},{"code":null,"e":25881,"s":25650,"text":"– -apparent-size option : This prints the apparent sizes for the files and not the disk usage which can be larger due to holes in files (sparse), internal fragmentation and indirect blocks but in real the apparent size is smaller."},{"code":null,"e":25932,"s":25881,"text":"-c, – -total option : This displays a grand total."},{"code":null,"e":26048,"s":25932,"text":"-B, – -block-size=SIZE option : This option causes the size to scale by SIZE like -BM prints the size in Megabytes."},{"code":null,"e":26133,"s":26048,"text":"-b, – -bytes option : This option is equivalent to – -apparent-size – -block-size=1."},{"code":null,"e":26253,"s":26133,"text":"-D, – -dereference-args option : This option is used to dereference only the symbolic links listed on the command line."},{"code":null,"e":26315,"s":26253,"text":"-H option : This option is equivalent to the above -D option."},{"code":null,"e":26501,"s":26315,"text":"– -files0-from=F option : This is used to summarize disk usage of the NUL-terminated file names specified in the file F and if the file F is “-” then read names from the standard input."},{"code":null,"e":26683,"s":26501,"text":"-h, – -human-readable option : This prints the sizes in human readable format i.e in rounding values and using abbreviations like 1 K and this is the most often used option with du."},{"code":null,"e":26775,"s":26683,"text":"– -si option: This is much similar to the -h option but uses power of 1000 and not of 1024."},{"code":null,"e":26823,"s":26775,"text":"-k option : its equivalent to – -block-size=1K."},{"code":null,"e":26905,"s":26823,"text":"-l, – -count-links option : This count sizes many 
times if files are hard-linked."},{"code":null,"e":26958,"s":26905,"text":"-m option : This is equivalent to – – block-size=1M."},{"code":null,"e":27031,"s":26958,"text":"-L, – -dereference option : This option dereferences all symbolic links."},{"code":null,"e":27145,"s":27031,"text":"-P, – -no-dereference option : This option tells du not to follow any symbolic links which is by default setting."},{"code":null,"e":27226,"s":27145,"text":"-0, –null option : This ends each output line with 0 byte rather than a newline."},{"code":null,"e":27322,"s":27226,"text":"-S, – -separate-dirs option : This causes the output not to include the size of subdirectories."},{"code":null,"e":27414,"s":27322,"text":"-s, – -summarize option : This option will allow to display a total only for each argument."},{"code":null,"e":27512,"s":27414,"text":"-x, – -one-file-system option : This will cause du to skip directories on different file systems."},{"code":null,"e":27598,"s":27512,"text":"-X, – -exclude-from=FILE option : Exclude files that match any pattern given in FILE."},{"code":null,"e":27668,"s":27598,"text":"– -exclude=PATTERN option : It will exclude files that match PATTERN."},{"code":null,"e":27854,"s":27668,"text":"-d, – -max-depth=N option : Print the total for a directory (or file, with –all) only if it is N or fewer levels below the command line argument; –max-depth=0 is the same as –summarize."},{"code":null,"e":27980,"s":27854,"text":"– -time option : This will show the time of the last modification of any file in the directory, or any of its subdirectories."},{"code":null,"e":28093,"s":27980,"text":"– -time=WORD option : This shows time as WORD instead of modification time :atime, access, use, ctime or status."},{"code":null,"e":28236,"s":28093,"text":"– -time-style=STYLE option : this shows time using STYLE: full-iso, long-iso, iso, or +FORMAT (FORMAT is interpreted like the format of date)."},{"code":null,"e":28296,"s":28236,"text":"– -help option : This will display 
a help message and exit."},{"code":null,"e":28357,"s":28296,"text":"– -version option : This will display version info and exit."},{"code":null,"e":28386,"s":28357,"text":"Examples of using du command"},{"code":null,"e":28493,"s":28386,"text":"1. Using -h option : As mentioned above, -h option is used to produce the output in human readable format."},{"code":null,"e":28631,"s":28493,"text":"//using -h with du\n\n$du -h kt.txt pt.txt\n8.0K kt.txt\n4.0K pt.txt\n\n/*now the output\nis in human readable\nformat i.e in\nKilobytes */\n"},{"code":null,"e":28847,"s":28631,"text":"2. Using du to show disk usage of a directory : Now, if you will pass a directory name say kartik as an argument to du it will show the disk usage info of the input directory kartik and its sub-directories (if any)."},{"code":null,"e":28975,"s":28847,"text":"/*using du to display disk usage \nof a directory and its\nsub-directories */\n\n$du kartik\n4 kartik/thakral\n24 kartik\n\n"},{"code":null,"e":29069,"s":28975,"text":"Above the disk usage info of the directory kartik and its sub-directory thakral is displayed."},{"code":null,"e":29328,"s":29069,"text":"3. Using -a option : now, as seen above only the disk usage info of directorykartik and its sub-directory thakral is displayed but what if you also want to know the disk usage info of all the files present under the directory kartik. For this, use -a option."},{"code":null,"e":29634,"s":29328,"text":"//using -a with du\n\n$du -a kartik\n8 kartik/kt.txt\n4 kartik/pt.txt\n4 kartik/pranjal.png\n4 kartik/thakral.png\n4 kartik/thakral\n24 kartik\n\n/*so with -a option used\nall the files (under directory\nkartik) disk usage info is\ndisplayed along with the \nthakral sub-directory */\n"},{"code":null,"e":29702,"s":29634,"text":"4. 
Using -c option : This option displays the grand total as shown."},{"code":null,"e":29848,"s":29702,"text":"//using -c with du\n\n$du -c -h kt.txt pt.txt\n8.0K kt.txt\n4.0K pt.txt\n12.0K total\n\n/* at the end\ntotal is displayed \nfor the disk usage */\n"},{"code":null,"e":29953,"s":29848,"text":"5. Using – -time option : This option is used to display the last modification time in the output of du."},{"code":null,"e":30122,"s":29953,"text":"//using --time with du\n\n$du --time kt.txt\n4 2017-11-18 16:00 kt.txt\n\n/*so the last\nmodification date and\ntime gets displayed\nwhen --time \noption is used */\n"},{"code":null,"e":30408,"s":30122,"text":"6. Using – -exclude=PATTERN option : In one of the example above, all the files disk usage related info was displayed of directory kartik. Now, suppose you want to know the info of .txt files only and not of .png files, in that case to exclude the .png pattern you can use this option."},{"code":null,"e":30624,"s":30408,"text":"//using --exclude=PATTERN with du\n\n$du --exclude=*.png -a kartik\n8 kartik/kt.txt\n4 kartik/pt.txt\n4 kartik/thakral\n24 kartik\n\n/*so, in this case\n.png files info are\nexcluded from the output */\n"},{"code":null,"e":31018,"s":30624,"text":"7. 
Using – -max-depth=N option : Now, this option allows you to limit the output of du to a particular depth of a directory.Suppose you have a directory named FRIENDS under which you have sub-directories as FRIENDS/college and FRIENDS/school and also under sub-directory college you have another sub-directory as FRIENDS/college/farewell then you can use – -max-depth=N option in this case as:"},{"code":null,"e":31168,"s":31018,"text":"//using --max-depth=N with du\n\n$du --max-depth=0 FRIENDS\n24 FRIENDS\n\n\n/* in this case you \nrestricted du output\nonly to top=level\ndirectory */\n"},{"code":null,"e":31226,"s":31168,"text":"Now, for sub-directories college and school you can use :"},{"code":null,"e":31317,"s":31226,"text":"$du --max-depth=1 FRIENDS\n16 FRIENDS/college\n8 FRIENDS/school\n24 FRIENDS\n\n"},{"code":null,"e":31380,"s":31317,"text":"Now, for FRIENDS/college/farewell you can use –max-depth=2 as:"},{"code":null,"e":31563,"s":31380,"text":"$du --max-depth=2 FRIENDS\n4 FRIENDS/college/farewell\n16 FRIENDS/college\n8 FRIENDS/school\n24 FRIENDS\n\n/*so this is how N\nin --max-depth=N \nis used for levels */\n"},{"code":null,"e":31831,"s":31563,"text":"8. 
Using – -files0-from=F option : As mentioned above, this is used to summarize disk usage of the NUL-terminated file names specified in the file F and if the file F is “-” then read names from the standard input.Let’s use this option for taking input from STDIN as:"},{"code":null,"e":32200,"s":31831,"text":"//using --files0from=F with du\n\n$pwd\n/home/kartik\n\n$ls\nkt.txt pt.txt thakral\n\n/*now use this option for \ntaking input from\nSTDIN */\n\n$du --files0-from=-\nkt.txt8 kt.txt\npt.txt4 pt.txt\n\n/* in this case after \ngiving kt.txt as a input\nfrom STDIN there is need to\npress Ctrl+D twice then the\noutput is shown and same for\npt.txt or any other file name\ngiven from STDIN */\n\n"},{"code":null,"e":32227,"s":32200,"text":"Applications of du command"},{"code":null,"e":32364,"s":32227,"text":"It can be used to find out the disk space occupied by a particular directory in case of transferring files from one computer to another."},{"code":null,"e":32499,"s":32364,"text":"du command can be linked with pipes to filters.A filter is usually a specialized program that transforms the data in a meaningful way."},{"code":null,"e":32661,"s":32499,"text":"There also exists some other ways like df command to find the disk usage but they all lack du ability to show the disk usage of individual directories and files."},{"code":null,"e":32755,"s":32661,"text":"It can also be used to find out quickly the number of sub-directories present in a directory."},{"code":null,"e":32788,"s":32755,"text":"Example of using du with filters"},{"code":null,"e":32937,"s":32788,"text":"Let’s take a simple example of using du with sort command so that the output produced by du will be sorted in the increasing order of size of files."},{"code":null,"e":33358,"s":32937,"text":"\n$du -a kartik\n8 kartik/kt.txt\n4 kartik/pt.txt\n4 kartik/pranjal.png\n4 kartik/thakral.png\n4 kartik/thakral\n24 kartik\n\n/*now using du to produce\nsorted output */\n\n$du -a kartik | sort -n\n4 kartik/pt.txt\n4 
kartik/pranjal.png\n4 kartik/thakral.png\n4 kartik/thakral\n8 kartik/kt.txt\n24 kartik\n\n/* now the output displayed\nis sorted according to the size */\n"},{"code":null,"e":33569,"s":33358,"text":"The sort command, used along with the -n option, lists the output in numeric order, with the smallest file appearing first. In this way du can be used to arrange the output according to size."},{"code":null,"e":33598,"s":33569,"text":"That’s all about the du command."},{"code":null,"e":33612,"s":33598,"text":"linux-command"},{"code":null,"e":33632,"s":33612,"text":"Linux-file-commands"},{"code":null,"e":33643,"s":33632,"text":"Linux-Unix"}],"string":"[]"}}},{"rowIdx":518,"cells":{"title":{"kind":"string","value":"AWT TextEvent Class"},"text":{"kind":"string","value":"The object of this class represents text events. A TextEvent is generated when a character is entered in a text field or text area. The TextEvent instance does not include the characters currently in the text component that generated the event; instead, other methods are provided to retrieve that information.
Following is the declaration for the java.awt.event.TextEvent class:
public class TextEvent
   extends AWTEvent
Following are the fields of the java.awt.event.TextEvent class:
static int TEXT_FIRST -- The first number in the range of ids used for text events.
static int TEXT_LAST -- The last number in the range of ids used for text events.
static int TEXT_VALUE_CHANGED -- This event id indicates that the object's text changed.
TextEvent(Object source, int id)
Constructs a TextEvent object.
String paramString()
Returns a parameter string identifying this text event.
This class inherits methods from the following classes:
java.awt.AWTEvent
java.util.EventObject
java.lang.Object
"},"parsed":{"kind":"list like","value":[{"code":null,"e":2070,"s":1747,"text":"The object of this class represents text events. A TextEvent is generated when a character is entered in a text field or text area. The TextEvent instance does not include the characters currently in the text component that generated the event; instead, other methods are provided to retrieve that information."},{"code":null,"e":2135,"s":2070,"text":"Following is the declaration for the java.awt.event.TextEvent class:"},{"code":null,"e":2178,"s":2135,"text":"public class TextEvent\n   extends AWTEvent"},{"code":null,"e":2239,"s":2178,"text":"Following are the fields of the java.awt.event.TextEvent class:"},{"code":null,"e":2323,"s":2239,"text":"static int TEXT_FIRST -- The first number in the range of ids used for text events."},{"code":null,"e":2489,"s":2407,"text":"static int TEXT_LAST -- The last number in the range of ids used for text events."},{"code":null,"e":2656,"s":2571,"text":"static int TEXT_VALUE_CHANGED -- This event id indicates that the object's text changed."},{"code":null,"e":2775,"s":2741,"text":"TextEvent(Object source, int id)"},{"code":null,"e":2806,"s":2775,"text":"Constructs a TextEvent object."},{"code":null,"e":2827,"s":2806,"text":"String paramString()"},{"code":null,"e":2884,"s":2827,"text":"Returns a parameter string identifying this text event."},{"code":null,"e":2940,"s":2884,"text":"This class inherits methods from the following classes:"},{"code":null,"e":2958,"s":2940,"text":"java.awt.AWTEvent"},{"code":null,"e":2998,"s":2976,"text":"java.util.EventObject"},{"code":null,"e":3037,"s":3020,"text":"java.lang.Object"}],"string":"[]"}}},{"rowIdx":519,"cells":{"title":{"kind":"string","value":"List assign() function in C++ STL"},"text":{"kind":"string","value":"Given is the task to show the working of the assign() function in
C++.
The list::assign() function is a part of the C++ Standard Template Library. It is used to assign values to a list and also to copy values from one list to another.
The <list> header file should be included to call this function.
The syntax for assigning new values is as follows −
List_Name.assign(size,value)
The syntax for copying values from one list to another is as follows −
First_List.assign(Second_List.begin(),Second_List.end())
The function takes two parameters −
The first is size, which represents the size of the list, and the second is value, which represents the data value to be stored at each position of the list.
The function has no return value.
Input: Lt.assign(3,10)
Output: The size of list Lt is 3.
The elements of the list Lt are 10 10 10.
Explanation −
The following example shows how we can give a list its size and values by using the assign() function. The first argument passed to the function becomes the size of the list, in this case 3, and the second argument is the value assigned to each position of the list, here 10.
Input: int array[5] = { 1, 2, 3, 4 }
Lt.assign(array,array+3)
Output: The size of list Lt is 3.
The elements of the list Lt are 1 2 3.
Explanation −
The following example shows how we can assign values to a list using an array. 
The total number of elements that we assign to the list becomes the size of the list.
The user simply has to pass the name of the array as the first argument of the assign() function; the second argument should be the name of the array followed by a “+” sign and the number of elements to assign to the list.
In the above case we have written 3, so the first three elements of the array are assigned to the list.
If we write a number that is bigger than the number of initialized elements, say 5, the program will not show any error; the size of the list becomes 5 and the extra position holds the value zero, because the remaining elements of array[5] are zero-initialized. Reading past the end of the array itself (for example array+6) is undefined behavior.
Approach used in the below program is as follows −
First create a function ShowList(list<int> L) that will display the elements of the list.
Create an iterator, let’s say itr, that will point to the first element of the list to be displayed.
Make the loop run till itr reaches the final element of the list.
Then inside the main() function create three lists of type list<int>, let’s say L1, L2 and L3, so that they accept values of type int, and then create an array of type int, let’s say arr[], and assign it some values.
Then use the assign() function to assign a size and some values to the list L1, and pass the list L1 into the ShowList() function.
Then use the assign() function to copy the elements of list L1 into L2, and also pass the list L2 into the ShowList() function.
Then use the assign() function to copy the elements of the array arr[] into the list L3, and pass the list L3 into the ShowList() function.
Start
Step 1-> Declare function ShowList(list<int> L) for showing list elements
   Declare iterator itr
   Loop For itr=L.begin() and itr!=L.end() and itr++
      Print *itr
   End
Step 2-> In function main()
   Declare lists L1,L2,L3
   Initialize array arr[]
   Call L1.assign(size,value)
   Print L1.size();
   Call function ShowList(L1) to display L1
   Call 
L2.assign(L1.begin(),L1.end())
   Print L2.size();
   Call function ShowList(L2) to display L2
   Call L3.assign(arr,arr+4)
   Print L3.size();
   Call function ShowList(L3) to display L3
Stop
 Live Demo
#include <iostream>
#include <list>
using namespace std;
void ShowList(list<int> L) {
   cout<<"The elements of the list are ";
   list<int>::iterator itr;
   for(itr=L.begin(); itr!=L.end(); itr++) {
      cout<<*itr<<" ";
   }
   cout<<"\n";
}
int main() {
   list<int> L1;
   list<int> L2;
   list<int> L3;
   int arr[10] = { 6, 7, 2, 4 };
   //assigning size and values to list L1
   L1.assign(3,20);
   cout<<"The size of list L1 is "<<L1.size()<<"\n";
   ShowList(L1);
   //copying the elements of list L1 into list L2
   L2.assign(L1.begin(),L1.end());
   cout<<"The size of list L2 is "<<L2.size()<<"\n";
   ShowList(L2);
   //assigning the first four array elements to list L3
   L3.assign(arr,arr+4);
   cout<<"The size of list L3 is "<<L3.size()<<"\n";
   ShowList(L3);
   return 0;
}
function.\"\n },\n {\n \"code\": null,\n \"e\": 3921,\n \"s\": 3798,\n \"text\": \"Then use the assign() function to copy elements of list L1 into L2 and also pass the list L2 into the ShowList() function.\"\n },\n {\n \"code\": null,\n \"e\": 4059,\n \"s\": 3921,\n \"text\": \"Then use the assign() function to copy elements of the array arr[] into the list L3 and pass the list L3 into the DisplayList() function.\"\n },\n {\n \"code\": null,\n \"e\": 4624,\n \"s\": 4059,\n \"text\": \"Start\\nStep 1-> Declare function DisplayList(list L) for showing list elements\\n Declare iterator itr\\n Loop For itr=L.begin() and itr!=L.end() and itr++\\n Print *itr\\n End\\nStep 2-> In function main()\\n Declare lists L1,L2,L3\\n Initialize array arr[]\\n Call L1.assign(size,value)\\n Print L1.size();\\n Call function DisplayList(L1) to display L1\\n Call L2.assign(L1.begin(),L1.end())\\n Print L2.size();\\n Call function DisplayList(L2) to display L2\\n Call L3.assign(arr,arr+4)\\n Print L3.size();\\n Call function DisplayList(L3) to display L3\\nStop\"\n },\n {\n \"code\": null,\n \"e\": 4635,\n \"s\": 4624,\n \"text\": \" Live Demo\"\n },\n {\n \"code\": null,\n \"e\": 5410,\n \"s\": 4635,\n \"text\": \"#include\\n#include\\nusing namespace std;\\nint ShowList(list L) {\\n cout<<\\\"The elements of the list are \\\";\\n list::iterator itr;\\n for(itr=L.begin(); itr!=L.end(); itr++) {\\n cout<<*itr<<\\\" \\\";\\n }\\n cout<<\\\"\\\\n\\\";\\n}\\nint main() {\\n list L1;\\n list L2;\\n list L3;\\n int arr[10] = { 6, 7, 2, 4 };\\n //assigning size and values to list L1\\n L1.assign(3,20);\\n cout<<\\\"The size of list L1 is \\\"<> ~/.bashrc\nExpose the port on which jupyter notebook will listen and run it. Note that in this example I am running a notebook with no authentication which is only for illustrative purposes. 
You should always turn on proper authentication.\nEXPOSE 8888 ENTRYPOINT [\"jupyter\", \"notebook\", \"--no-browser\",\"--ip=0.0.0.0\",\"--NotebookApp.token=''\",\"--NotebookApp.password=''\"]\nFinally I find it very useful to put all my microservices behind the reverse proxy traefik. This has the additional advantage of being able to turn on SSL on all services without individual configuration. In this toy example there is only one microservice but when we have many it is useful to turn them on with prefixes and here is why I used an entry point instead of command in the dockerfile above. I can now complete the command by asking jupyter to start the notebook server while listening on a different path where traefik will redirect the traffic\ncommand: \"--NotebookApp.base_url='multiple_conda_environments'\"\nNow the notebook can be accessed on https://myserver.com /multiple_conda_environment where you should replace myserver.com with your hostname.\nNow when you run try to create a new notebook you should see a choice between the standard python3 and two additional kernels\nIf you go to the actual repository with the full code to run both tensorflow and pytorch you will find that I appropriated the python3 kernel to convert it into a tensorflow kernel and have changed its name to reflect that.\nThis is a better solution if you do not care about the extra default kernel floating around that is not going to be used.\nHaving your jupyter server run as a container is a must for every data scientist as it allows one to seamlessly move their lab, as it were, from one cloud to another.\nBeing able to have multiple kernels in jupyter is likewise a must for every data scientist as well as it allows one to work on multiple projects with mutually exclusive package dependencies.\nDue to the way standard conda environments and their jupyter kernelspecs are installed, making the two play with each other is not straightforward. 
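The ENTRYPOINT/`command` split described above can be sketched as a docker-compose service. The service name and traefik labels below are illustrative assumptions (traefik v2-style), not taken from the article's repository; only the `command:` line is the one quoted in the text:

```yaml
# Hypothetical docker-compose.yml fragment -- names and labels are
# illustrative, not the repository's actual file.
services:
  notebook:
    build: .
    # Appended to the Dockerfile's ENTRYPOINT, completing the jupyter command:
    command: "--NotebookApp.base_url='multiple_conda_environments'"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.notebook.rule=PathPrefix(`/multiple_conda_environments`)"
```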
In this tutorial and the associated github repository I have explained how to make containerized jupyter installations that support multiple kernels."},"parsed":{"kind":"list like","value":[{"code":null,"e":585,"s":172,"text":"Docker and Docker-Compose are great utilities that support the microservice paradigm by allowing efficient containerization. Within the python ecosystem the package manager Conda also allows some kind of containerization that is limited to python packages. Conda environments are especially handy for data scientists working in jupyter notebooks that have different (and mutually exclusive) package dependencies."},{"code":null,"e":975,"s":585,"text":"However, due to the peculiar way in which conda environments are setup, getting them working out of the box in Docker, as it were, is not so straightforward. Furthermore, adding kernelspecs for these environments to jupyter is another very useful but complicated step. This article will clearly and concisely explain how to setup Dockerized containers with Jupyter having multiple kernels."},{"code":null,"e":1159,"s":975,"text":"To gain from this article you should already know the basics of Docker, Conda and Jupyter. If you don’t and would like to there are excellent tutorials on all three on their websites."},{"code":null,"e":1422,"s":1159,"text":"I like Docker in the context of data science and machine learning research as it is very easy for me to containerize my whole research setup and move it to the various cloud services that I use (my laptop, my desktop, GCP, a barebones cloud we maintain and AWS)."},{"code":null,"e":1673,"s":1422,"text":"Requiring multiple conda environments and associated jupyter kernels is something one often needs in Data Science and Machine Learning. For instance when dealing with Python 2 and Python 3 code or when transitioning from Tensorflow 1 to Tensorflow 2."},{"code":null,"e":2033,"s":1673,"text":"One such issue came up recently for me. 
I work with TensorFlow and PyTorch both for deeplearning and till now I had them both installed in the same conda environment in my docker image. However, turns out that installing tensorflow 2.x breaks tensorboard for pytorch and the “solution” seems to be to not install tensorflow in the same environment as pytorch."},{"code":null,"e":2148,"s":2033,"text":"To reproduce it just try running the tutorial notebook mentioned in the image above after installing tensorflow 2."},{"code":null,"e":2205,"s":2148,"text":"So, in this case the solution is either of the following"},{"code":null,"e":2352,"s":2205,"text":"Two dockerized containers with one having tensorflow 2 and the other pytorch.One container with two environments that give two kernels in jupyter."},{"code":null,"e":2430,"s":2352,"text":"Two dockerized containers with one having tensorflow 2 and the other pytorch."},{"code":null,"e":2500,"s":2430,"text":"One container with two environments that give two kernels in jupyter."},{"code":null,"e":2704,"s":2500,"text":"The second one seems more elegant. Nevertheless, the standard way of creating a conda environment and activating it requires an interactive sessions and that is not possible when building a docker image."},{"code":null,"e":3186,"s":2704,"text":"In this article I quickly describe what needs to be done using a simple example and the actual code can be found in my repository. In fact my repository has the actual code that can be used to run the jupyter notebook mentioned above here but that has a lot of other steps that can distract away from the main topic of this article which is how to make Docker and Conda play well together. 
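The point that activating a conda environment needs an interactive session can be made concrete with a Dockerfile sketch (the environment and kernel names here are placeholders, not the article's). A bare `RUN conda activate ...` fails during `docker build` because every RUN statement starts a fresh non-interactive shell, so the workaround — the one this article uses — is to route subsequent commands through `conda run`:

```dockerfile
# "myenv" and "my_kernel" are hypothetical placeholder names.
# RUN conda activate myenv          <- fails: build shells are non-interactive
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
# Every following RUN now executes inside "myenv" without any activation:
RUN python -m ipykernel install --name my_kernel --display-name "My Kernel"
```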
So I have also made another toy example here that just helps explain this particular point."},{"code":null,"e":3237,"s":3186,"text":"The docker file contains the following main steps:"},{"code":null,"e":3255,"s":3237,"text":"Start with Ubuntu"},{"code":null,"e":3286,"s":3255,"text":"Download and install miniconda"},{"code":null,"e":3473,"s":3286,"text":"RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.shRUN bash Miniconda3-latest-Linux-x86_64.sh -b -p /minicondaENV PATH=$PATH:/miniconda/condabin:/miniconda/bin"},{"code":null,"e":3513,"s":3473,"text":"Install jupyter in the base environment"},{"code":null,"e":3930,"s":3513,"text":"Create two more environments and add them to jupyter kernel list. This is a non-trivial step as one has to run ipykernel install in the new environment. Usually one would do that by doing conda init and then activate the new environment but this requires starting a new bash shell which we cannot do here. This is thus handled in a different way by running commands in the appropriate shells in the following manner."},{"code":null,"e":4166,"s":3930,"text":"RUN conda env create -f packages/environment_one.ymlSHELL [\"conda\",\"run\",\"-n\",\"one\",\"/bin/bash\",\"-c\"]RUN python -m ipykernel install --name kernel_one --display-name \"Display Name One\"RUN pip install -U -r packages/requirements_one.txt"},{"code":null,"e":4209,"s":4166,"text":"Add a new user and switch to her directory"},{"code":null,"e":4503,"s":4209,"text":"Perform conda init as well as make it so that by default a new bash session opens in a one of the newly created environments. 
This step is not necessary for the problem we described is worth noting in case we do want the shell to be conda friendly and launch into one of the extra environments"},{"code":null,"e":4584,"s":4503,"text":"SHELL [\"/bin/bash\",\"-c\"]RUN conda initRUN echo 'conda activate one' >> ~/.bashrc"},{"code":null,"e":4813,"s":4584,"text":"Expose the port on which jupyter notebook will listen and run it. Note that in this example I am running a notebook with no authentication which is only for illustrative purposes. You should always turn on proper authentication."},{"code":null,"e":4986,"s":4813,"text":"EXPOSE 8888 ENTRYPOINT [\"jupyter\", \"notebook\", \"--no-browser\",\"--ip=0.0.0.0\",\"--NotebookApp.token=''\",\"--NotebookApp.password=''\"]"},{"code":null,"e":5543,"s":4986,"text":"Finally I find it very useful to put all my microservices behind the reverse proxy traefik. This has the additional advantage of being able to turn on SSL on all services without individual configuration. In this toy example there is only one microservice but when we have many it is useful to turn them on with prefixes and here is why I used an entry point instead of command in the dockerfile above. 
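Why ENTRYPOINT rather than CMD matters here: arguments supplied at run time (including docker-compose's `command:`) are appended to an exec-form ENTRYPOINT, whereas they would replace a CMD outright. A sketch of the composition:

```dockerfile
ENTRYPOINT ["jupyter", "notebook", "--no-browser", "--ip=0.0.0.0"]
# `docker run <image> --NotebookApp.base_url='/prefix'` then executes:
#   jupyter notebook --no-browser --ip=0.0.0.0 --NotebookApp.base_url='/prefix'
# Had this been CMD [...], the run-time arguments would have replaced the
# whole command instead of extending it.
```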
I can now complete the command by asking jupyter to start the notebook server while listening on a different path where traefik will redirect the traffic"},{"code":null,"e":5607,"s":5543,"text":"command: \"--NotebookApp.base_url='multiple_conda_environments'\""},{"code":null,"e":5750,"s":5607,"text":"Now the notebook can be accessed on https://myserver.com /multiple_conda_environment where you should replace myserver.com with your hostname."},{"code":null,"e":5876,"s":5750,"text":"Now when you run try to create a new notebook you should see a choice between the standard python3 and two additional kernels"},{"code":null,"e":6100,"s":5876,"text":"If you go to the actual repository with the full code to run both tensorflow and pytorch you will find that I appropriated the python3 kernel to convert it into a tensorflow kernel and have changed its name to reflect that."},{"code":null,"e":6222,"s":6100,"text":"This is a better solution if you do not care about the extra default kernel floating around that is not going to be used."},{"code":null,"e":6389,"s":6222,"text":"Having your jupyter server run as a container is a must for every data scientist as it allows one to seamlessly move their lab, as it were, from one cloud to another."},{"code":null,"e":6580,"s":6389,"text":"Being able to have multiple kernels in jupyter is likewise a must for every data scientist as well as it allows one to work on multiple projects with mutually exclusive package dependencies."}],"string":"[\n {\n \"code\": null,\n \"e\": 585,\n \"s\": 172,\n \"text\": \"Docker and Docker-Compose are great utilities that support the microservice paradigm by allowing efficient containerization. Within the python ecosystem the package manager Conda also allows some kind of containerization that is limited to python packages. 
Conda environments are especially handy for data scientists working in jupyter notebooks that have different (and mutually exclusive) package dependencies.\"\n },\n {\n \"code\": null,\n \"e\": 975,\n \"s\": 585,\n \"text\": \"However, due to the peculiar way in which conda environments are setup, getting them working out of the box in Docker, as it were, is not so straightforward. Furthermore, adding kernelspecs for these environments to jupyter is another very useful but complicated step. This article will clearly and concisely explain how to setup Dockerized containers with Jupyter having multiple kernels.\"\n },\n {\n \"code\": null,\n \"e\": 1159,\n \"s\": 975,\n \"text\": \"To gain from this article you should already know the basics of Docker, Conda and Jupyter. If you don’t and would like to there are excellent tutorials on all three on their websites.\"\n },\n {\n \"code\": null,\n \"e\": 1422,\n \"s\": 1159,\n \"text\": \"I like Docker in the context of data science and machine learning research as it is very easy for me to containerize my whole research setup and move it to the various cloud services that I use (my laptop, my desktop, GCP, a barebones cloud we maintain and AWS).\"\n },\n {\n \"code\": null,\n \"e\": 1673,\n \"s\": 1422,\n \"text\": \"Requiring multiple conda environments and associated jupyter kernels is something one often needs in Data Science and Machine Learning. For instance when dealing with Python 2 and Python 3 code or when transitioning from Tensorflow 1 to Tensorflow 2.\"\n },\n {\n \"code\": null,\n \"e\": 2033,\n \"s\": 1673,\n \"text\": \"One such issue came up recently for me. I work with TensorFlow and PyTorch both for deeplearning and till now I had them both installed in the same conda environment in my docker image. 
However, turns out that installing tensorflow 2.x breaks tensorboard for pytorch and the “solution” seems to be to not install tensorflow in the same environment as pytorch.\"\n },\n {\n \"code\": null,\n \"e\": 2148,\n \"s\": 2033,\n \"text\": \"To reproduce it just try running the tutorial notebook mentioned in the image above after installing tensorflow 2.\"\n },\n {\n \"code\": null,\n \"e\": 2205,\n \"s\": 2148,\n \"text\": \"So, in this case the solution is either of the following\"\n },\n {\n \"code\": null,\n \"e\": 2352,\n \"s\": 2205,\n \"text\": \"Two dockerized containers with one having tensorflow 2 and the other pytorch.One container with two environments that give two kernels in jupyter.\"\n },\n {\n \"code\": null,\n \"e\": 2430,\n \"s\": 2352,\n \"text\": \"Two dockerized containers with one having tensorflow 2 and the other pytorch.\"\n },\n {\n \"code\": null,\n \"e\": 2500,\n \"s\": 2430,\n \"text\": \"One container with two environments that give two kernels in jupyter.\"\n },\n {\n \"code\": null,\n \"e\": 2704,\n \"s\": 2500,\n \"text\": \"The second one seems more elegant. Nevertheless, the standard way of creating a conda environment and activating it requires an interactive sessions and that is not possible when building a docker image.\"\n },\n {\n \"code\": null,\n \"e\": 3186,\n \"s\": 2704,\n \"text\": \"In this article I quickly describe what needs to be done using a simple example and the actual code can be found in my repository. In fact my repository has the actual code that can be used to run the jupyter notebook mentioned above here but that has a lot of other steps that can distract away from the main topic of this article which is how to make Docker and Conda play well together. 
So I have also made another toy example here that just helps explain this particular point.\"\n },\n {\n \"code\": null,\n \"e\": 3237,\n \"s\": 3186,\n \"text\": \"The docker file contains the following main steps:\"\n },\n {\n \"code\": null,\n \"e\": 3255,\n \"s\": 3237,\n \"text\": \"Start with Ubuntu\"\n },\n {\n \"code\": null,\n \"e\": 3286,\n \"s\": 3255,\n \"text\": \"Download and install miniconda\"\n },\n {\n \"code\": null,\n \"e\": 3473,\n \"s\": 3286,\n \"text\": \"RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.shRUN bash Miniconda3-latest-Linux-x86_64.sh -b -p /minicondaENV PATH=$PATH:/miniconda/condabin:/miniconda/bin\"\n },\n {\n \"code\": null,\n \"e\": 3513,\n \"s\": 3473,\n \"text\": \"Install jupyter in the base environment\"\n },\n {\n \"code\": null,\n \"e\": 3930,\n \"s\": 3513,\n \"text\": \"Create two more environments and add them to jupyter kernel list. This is a non-trivial step as one has to run ipykernel install in the new environment. Usually one would do that by doing conda init and then activate the new environment but this requires starting a new bash shell which we cannot do here. This is thus handled in a different way by running commands in the appropriate shells in the following manner.\"\n },\n {\n \"code\": null,\n \"e\": 4166,\n \"s\": 3930,\n \"text\": \"RUN conda env create -f packages/environment_one.ymlSHELL [\\\"conda\\\",\\\"run\\\",\\\"-n\\\",\\\"one\\\",\\\"/bin/bash\\\",\\\"-c\\\"]RUN python -m ipykernel install --name kernel_one --display-name \\\"Display Name One\\\"RUN pip install -U -r packages/requirements_one.txt\"\n },\n {\n \"code\": null,\n \"e\": 4209,\n \"s\": 4166,\n \"text\": \"Add a new user and switch to her directory\"\n },\n {\n \"code\": null,\n \"e\": 4503,\n \"s\": 4209,\n \"text\": \"Perform conda init as well as make it so that by default a new bash session opens in a one of the newly created environments. 
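Stepping back to the `conda env create -f packages/environment_one.yml` step above: that command expects an environment file. A minimal illustrative example of what such a file could contain follows — these packages are an assumption, not the repository's actual file. Note that `ipykernel` must be present in the environment for the later `python -m ipykernel install` to work:

```yaml
# Hypothetical packages/environment_one.yml -- contents are illustrative only.
name: one
channels:
  - defaults
dependencies:
  - python=3.8
  - pip
  - ipykernel
```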
This step is not necessary for the problem we described is worth noting in case we do want the shell to be conda friendly and launch into one of the extra environments\"\n },\n {\n \"code\": null,\n \"e\": 4584,\n \"s\": 4503,\n \"text\": \"SHELL [\\\"/bin/bash\\\",\\\"-c\\\"]RUN conda initRUN echo 'conda activate one' >> ~/.bashrc\"\n },\n {\n \"code\": null,\n \"e\": 4813,\n \"s\": 4584,\n \"text\": \"Expose the port on which jupyter notebook will listen and run it. Note that in this example I am running a notebook with no authentication which is only for illustrative purposes. You should always turn on proper authentication.\"\n },\n {\n \"code\": null,\n \"e\": 4986,\n \"s\": 4813,\n \"text\": \"EXPOSE 8888 ENTRYPOINT [\\\"jupyter\\\", \\\"notebook\\\", \\\"--no-browser\\\",\\\"--ip=0.0.0.0\\\",\\\"--NotebookApp.token=''\\\",\\\"--NotebookApp.password=''\\\"]\"\n },\n {\n \"code\": null,\n \"e\": 5543,\n \"s\": 4986,\n \"text\": \"Finally I find it very useful to put all my microservices behind the reverse proxy traefik. This has the additional advantage of being able to turn on SSL on all services without individual configuration. In this toy example there is only one microservice but when we have many it is useful to turn them on with prefixes and here is why I used an entry point instead of command in the dockerfile above. 
I can now complete the command by asking jupyter to start the notebook server while listening on a different path where traefik will redirect the traffic\"\n },\n {\n \"code\": null,\n \"e\": 5607,\n \"s\": 5543,\n \"text\": \"command: \\\"--NotebookApp.base_url='multiple_conda_environments'\\\"\"\n },\n {\n \"code\": null,\n \"e\": 5750,\n \"s\": 5607,\n \"text\": \"Now the notebook can be accessed on https://myserver.com /multiple_conda_environment where you should replace myserver.com with your hostname.\"\n },\n {\n \"code\": null,\n \"e\": 5876,\n \"s\": 5750,\n \"text\": \"Now when you run try to create a new notebook you should see a choice between the standard python3 and two additional kernels\"\n },\n {\n \"code\": null,\n \"e\": 6100,\n \"s\": 5876,\n \"text\": \"If you go to the actual repository with the full code to run both tensorflow and pytorch you will find that I appropriated the python3 kernel to convert it into a tensorflow kernel and have changed its name to reflect that.\"\n },\n {\n \"code\": null,\n \"e\": 6222,\n \"s\": 6100,\n \"text\": \"This is a better solution if you do not care about the extra default kernel floating around that is not going to be used.\"\n },\n {\n \"code\": null,\n \"e\": 6389,\n \"s\": 6222,\n \"text\": \"Having your jupyter server run as a container is a must for every data scientist as it allows one to seamlessly move their lab, as it were, from one cloud to another.\"\n },\n {\n \"code\": null,\n \"e\": 6580,\n \"s\": 6389,\n \"text\": \"Being able to have multiple kernels in jupyter is likewise a must for every data scientist as well as it allows one to work on multiple projects with mutually exclusive package dependencies.\"\n }\n]"}}},{"rowIdx":522,"cells":{"title":{"kind":"string","value":"bin() in Python - GeeksforGeeks"},"text":{"kind":"string","value":"17 Sep, 2021\nPython bin() function returns the binary string of a given integer.\nSyntax: bin(a)\nParameters : a : an integer to convert\nReturn Value : 
A binary string of an integer or int object.\nExceptions : Raises TypeError when a float value is sent in arguments.\nPython3\n# Python code to demonstrate working of\n# bin()\n\n# declare variable\nnum = 100\n\n# print binary number\nprint(bin(num))\nOutput:\n0b1100100\nPython3\n# Python code to demonstrate working of\n# bin()\n\n# function returning binary string\ndef Binary(n):\n    s = bin(n)\n    # removing \"0b\" prefix\n    s1 = s[2:]\n    return s1\n\nprint(\"The binary representation of 100 (using bin()) is : \", end=\"\")\nprint(Binary(100))\nOutput: \nThe binary representation of 100 (using bin()) is : 1100100\nHere we pass an object of the class to bin(), which relies on the special method __index__(). __index__() must return an integer; if an object cannot supply one (a float, for example), bin() raises a TypeError.\nPython3\n# Python code to demonstrate working of\n# bin()\n\nclass number:\n    num = 100\n\n    def __index__(self):\n        return self.num\n\nprint(bin(number()))\nOutput:\n0b1100100\nThis article is contributed by Manjeet Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.\nPlease write comments if you find anything incorrect, or you want to share more information about the topic discussed above. 
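A few related conversions are worth knowing alongside bin(): the format mini-language yields the binary digits without the "0b" prefix directly, bin() renders negative integers with a minus sign rather than in two's complement, and int() with base 2 inverts the conversion:

```python
num = 100

# format()/f-strings give the binary digits without the "0b" prefix
print(format(num, 'b'))   # 1100100
print(f"{num:08b}")       # 01100100 (zero-padded to eight digits)

# Negative integers keep a sign; bin() does not produce two's complement
print(bin(-10))           # -0b1010

# int() with base 2 inverts the conversion (the "0b" prefix is accepted)
print(int(bin(num), 2))   # 100
```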
\nkumar_satyam\nbase-conversion\nPython-Built-in-functions\nPython\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nPython Dictionary\nRead a file line by line in Python\nEnumerate() in Python\nHow to Install PIP on Windows ?\nIterate over a list in Python\nDifferent ways to create Pandas Dataframe\nPython String | replace()\nCreate a Pandas DataFrame from Lists\nPython program to convert a list to string\nReading and Writing to text files in Python"},"parsed":{"kind":"list like","value":[{"code":null,"e":24678,"s":24650,"text":"\n17 Sep, 2021"},{"code":null,"e":24746,"s":24678,"text":"Python bin() function returns the binary string of a given integer."},{"code":null,"e":24762,"s":24746,"text":"Syntax: bin(a)"},{"code":null,"e":24801,"s":24762,"text":"Parameters : a : an integer to convert"},{"code":null,"e":24861,"s":24801,"text":"Return Value : A binary string of an integer or int object."},{"code":null,"e":24932,"s":24861,"text":"Exceptions : Raises TypeError when a float value is sent in arguments."},{"code":null,"e":24940,"s":24932,"text":"Python3"},{"code":"# Python code to demonstrate working of# bin() # declare variablenum = 100 # print binary numberprint(bin(num))","e":25052,"s":24940,"text":null},{"code":null,"e":25060,"s":25052,"text":"Output:"},{"code":null,"e":25070,"s":25060,"text":"0b1100100"},{"code":null,"e":25078,"s":25070,"text":"Python3"},{"code":"# Python code to demonstrate working of# bin() # function returning binary stringdef Binary(n): s = bin(n) # removing \"0b\" prefix s1 = s[2:] return s1 print(\"The binary representation of 100 (using bin()) is : \", end=\"\")print(Binary(100))","e":25330,"s":25078,"text":null},{"code":null,"e":25339,"s":25330,"text":"Output: "},{"code":null,"e":25399,"s":25339,"text":"The binary representation of 100 (using bin()) is : 1100100"},{"code":null,"e":25620,"s":25399,"text":"Here we send the object of the class to the bin methods, and we are using python 
special methods __index()__ method which always returns positive integer, and it can not be a rising error if the value is not an integer. "},{"code":null,"e":25628,"s":25620,"text":"Python3"},{"code":"# Python code to demonstrate working of# bin()class number: num = 100 def __index__(self): return(self.num) print(bin(number()))","e":25771,"s":25628,"text":null},{"code":null,"e":25779,"s":25771,"text":"Output:"},{"code":null,"e":25789,"s":25779,"text":"0b1100100"},{"code":null,"e":26086,"s":25789,"text":"This article is contributed by Manjeet Singh. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to review-team@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks."},{"code":null,"e":26212,"s":26086,"text":"Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "},{"code":null,"e":26225,"s":26212,"text":"kumar_satyam"},{"code":null,"e":26241,"s":26225,"text":"base-conversion"},{"code":null,"e":26267,"s":26241,"text":"Python-Built-in-functions"},{"code":null,"e":26274,"s":26267,"text":"Python"},{"code":null,"e":26372,"s":26274,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":26390,"s":26372,"text":"Python Dictionary"},{"code":null,"e":26425,"s":26390,"text":"Read a file line by line in Python"},{"code":null,"e":26447,"s":26425,"text":"Enumerate() in Python"},{"code":null,"e":26479,"s":26447,"text":"How to Install PIP on Windows ?"},{"code":null,"e":26509,"s":26479,"text":"Iterate over a list in Python"},{"code":null,"e":26551,"s":26509,"text":"Different ways to create Pandas Dataframe"},{"code":null,"e":26577,"s":26551,"text":"Python String | replace()"},{"code":null,"e":26614,"s":26577,"text":"Create a Pandas DataFrame from Lists"},{"code":null,"e":26657,"s":26614,"text":"Python program to 
convert a list to string"}]}}},{"rowIdx":523,"cells":{"title":{"kind":"string","value":"K-Nearest Neighbours (kNN) Algorithm: Common Questions and Python Implementation | by Chingis Oinar | Towards Data Science"},"text":{"kind":"string","value":"K-Nearest Neighbours is considered to be one of the most intuitive machine learning algorithms since it is simple to understand and explain. Additionally, it is quite convenient to demonstrate how everything goes visually. 
However, the kNN algorithm is still a common and very useful algorithm for a large variety of classification problems. If you are new to machine learning, make sure you test yourself on your understanding of this simple yet wonderful algorithm. There are a lot of useful sources on what it is and how it works, hence I want to go through five common or interesting questions that, in my opinion, you should know.
The k-NN algorithm does more computation at test time than at train time.
That is absolutely true. The idea of the kNN algorithm is to find a k-long list of samples that are close to a sample we want to classify. Therefore, the training phase is basically storing the training set, whereas during the prediction stage the algorithm looks for the k nearest neighbours in that stored data.
Why do you need to scale your data for the k-NN algorithm?
Imagine a dataset having m “examples” and n “features”. One feature dimension has values exactly between 0 and 1, while another feature dimension varies from -99999 to 99999. Considering the formula of Euclidean Distance, this will affect the performance by giving higher weight to the variables with a higher magnitude.
Read more: Why is scaling required in KNN and K-Means?
The k-NN algorithm can be used for imputing the missing value of both categorical and continuous variables.
That is true. k-NN can be used as one of many techniques for handling missing values. A new sample is imputed by determining the samples in the training set “nearest” to it and averaging these nearby points. 
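Concretely, the “average the nearest neighbours” step can be sketched in plain Python; the numbers below are toy values, not from the article:

```python
# Toy training rows: (feature_1, feature_2); feature_2 is missing
# for the new sample, so we impute it from the k=2 nearest rows.
train = [(1.0, 10.0), (1.2, 12.0), (8.0, 80.0)]
new_f1 = 1.1  # only feature_1 is observed for the new sample

# Distance is computed on the observed feature only
# (KNN imputers likewise skip missing coordinates).
nearest = sorted(train, key=lambda row: abs(row[0] - new_f1))[:2]

# The imputed value is the mean of the neighbours' feature_2.
imputed = sum(row[1] for row in nearest) / len(nearest)
print(imputed)  # 11.0
```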
The scikit-learn library provides a quick and convenient way to use this technique.
Note: NaNs are omitted while distances are calculated.
Example:
from sklearn.impute import KNNImputer

# define imputer (the default k is 5, i.e. n_neighbors=5)
imputer = KNNImputer()
# fit on the dataset
imputer.fit(X)
# transform the dataset
Xtrans = imputer.transform(X)
Thus, each missing value will be replaced by the mean value of its “neighbours”.
Is Euclidean Distance always the case?
Although Euclidean Distance is the most common metric used and taught, it is not always the optimal decision. In fact, it is hard to come up with the right metric just by looking at the data, so I would suggest trying a set of them. However, there are some special cases. For instance, Hamming distance is used in the case of categorical variables.
Read more: 3 text distances that every data scientist should know
Why should we not use the KNN algorithm for large datasets?
Here is an overview of the data flow that occurs in the KNN algorithm:
Calculate the distances to all vectors in a training set and store them
Sort the calculated distances
Store the K nearest vectors
Calculate the most frequent class displayed by K nearest vectors
Imagine you have a very large dataset. 
Therefore, it is not only a bad decision to store such a large amount of data, but it is also computationally costly to keep calculating and sorting all the values.
I will break down the implementation into the following stages:
Calculate the distances to all vectors in a training set and store them
Sort the calculated distances
Calculate the most frequent class displayed by K nearest vectors and make a prediction
Calculate the distances to all vectors in a training set and store them
It is important to note that there is a large variety of options to choose from as a metric; however, I want to use Euclidean Distance as an example. It is the most common metric used to calculate distances among vectors since it is straightforward and easy to explain. The general formula is as follows:
Euclidean Distance = sqrt(sum i to N (x1_i - x2_i)^2)
Thus, let’s summarize it with the following Python code. Note: Don’t forget to import sqrt() from the “math” module.
Sort the calculated distances
Firstly, we need to calculate all the distances between a single test sample and all the samples in our training set. As the distances are obtained, we should sort the list of distances and pick the “nearest” k vectors from our training set by looking at how far they are from the test sample.
Calculate the most frequent class displayed by K nearest vectors and make a prediction
Finally, in order to make a prediction, we should get our k “nearest” neighbours by calling the function defined above. Thus, the only thing that is left is to count the number of occurrences of each label and pick the most frequent one.
Let’s summarize everything by combining all the functions into a separate class object. 
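A minimal sketch of such a class might look as follows, assuming Euclidean distance and a simple majority vote (the class and method names here are illustrative, not taken from the original gist):

```python
from collections import Counter
from math import sqrt

class KNN:
    """Minimal k-nearest-neighbours classifier sketch."""

    def __init__(self, k):
        self.k = k

    def fit(self, X, y):
        # "Training" is just memorising the training set.
        self.X, self.y = X, y
        return self

    def _distance(self, a, b):
        # Euclidean distance between two feature vectors.
        return sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

    def predict(self, X_test):
        preds = []
        for row in X_test:
            # Sort stored samples by distance and keep the k nearest labels.
            nearest = sorted(zip(self.X, self.y),
                             key=lambda pair: self._distance(pair[0], row))[:self.k]
            # Majority vote over the k nearest labels.
            preds.append(Counter(label for _, label in nearest).most_common(1)[0][0])
        return preds

model = KNN(3).fit([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]],
                   [0, 0, 0, 1, 1, 1])
print(model.predict([[0.5, 0.5], [5.5, 5.5]]))  # [0, 1]
```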
Here is a more generalized version of the code; please take time to look through it.
Comparison
Let’s compare our implementation with the one provided by scikit-learn. I am going to use a simple toy dataset that contains two predictors, age and salary. Thus, we want to predict if a customer is willing to purchase our product.
I am going to skip preprocessing since it is not what I want to focus on; however, I used a train-test split and applied a StandardScaler afterwards. Anyway, if you are interested, I will provide the source code on Github.
Finally, I will define both models and fit our data. Please refer to the KNN implementation provided above. I am selecting 5 as our default k value. Note that for the latter model the default metric is minkowski, which with p=2 is equivalent to the standard Euclidean metric.
model = KNN(5)  # our model
model.fit(X_train, y_train)
predictions = model.predict(X_test)  # our model's predictions

from sklearn.neighbors import KNeighborsClassifier

# The default metric is minkowski; with p=2 it is equivalent to the
# standard Euclidean metric.
classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)
Results
The figure attached above shows that both models demonstrated identical performance. The accuracy turned out to be 0.93, which is a pretty good result. The figure attached below is a visualization of our test set results. I am providing a single figure since both models give identical predictions. However, I personally suggest using the implementations that are already provided, since our implementation is simple and inefficient. Moreover, it is just more convenient not to keep writing the exact same code every time.
To sum up, K-Nearest Neighbours is considered to be one of the most intuitive machine learning algorithms since it is simple to understand and explain. Additionally, it is quite convenient to demonstrate how everything goes visually. 
In this article we answered the following questions:\nthe k-NN algorithm does more computation on test time rather than train time.\nWhy do you need to scale your data for the k-NN algorithm?\nthe k-NN algorithm can be used for imputing the missing value of both categorical and continuous variables.\nIs Euclidean Distance always the case?\nWhy should we not use KNN algorithm for large datasets?\nMoreover, I provided a python implementation of the KNN algorithm in order to strengthen your understanding of what happens inside. The full implementation can be found on my Github."},"parsed":{"kind":"list like","value":[{"code":null,"e":811,"s":171,"text":"K-Nearest Neighbours is considered to be one of the most intuitive machine learning algorithms since it is simple to understand and explain. Additionally, it is quite convenient to demonstrate how everything goes visually. However, the kNN algorithm is still a common and very useful algorithm to use for a large variety of classification problems. If you are new to machine learning, make sure you test yourself on an understanding of this simple yet wonderful algorithm. There are a lot of useful sources on what it is and how it works, hence I want to go through 5 common or interesting questions you should know in my personal opinion."},{"code":null,"e":889,"s":811,"text":"The k-NN algorithm does more computation on test time rather than train time."},{"code":null,"e":1191,"s":889,"text":"That is absolutely true. The idea of the kNN algorithm is to find a k-long list of samples that are close to a sample we want to classify. Therefore, the training phase is basically storing a training set, whereas while the prediction stage the algorithm looks for k-neighbours using that stored data."},{"code":null,"e":1250,"s":1191,"text":"Why do you need to scale your data for the k-NN algorithm?"},{"code":null,"e":1622,"s":1250,"text":"Imagine a dataset having m number of “examples” and n number of “features”. 
There is one feature dimension having values exactly between 0 and 1. Meanwhile, there is also a feature dimension that varies from -99999 to 99999. Considering the formula of Euclidean Distance, this will affect the performance by giving higher weightage to variables having a higher magnitude."},{"code":null,"e":1677,"s":1622,"text":"Read more: Why is scaling required in KNN and K-Means?"},{"code":null,"e":1785,"s":1677,"text":"The k-NN algorithm can be used for imputing the missing value of both categorical and continuous variables."},{"code":null,"e":2097,"s":1785,"text":"That is true. k-NN can be used as one of many techniques when it comes to handling missing values. A new sample is imputed by determining the samples in the training set “nearest” to it and averages these nearby points to impute. A scikit learn library provides a quick and convenient way to use this technique."},{"code":null,"e":2152,"s":2097,"text":"Note: NaNs are omitted while distances are calculated."},{"code":null,"e":2161,"s":2152,"text":"Example:"},{"code":null,"e":2355,"s":2161,"text":"from sklearn.impute import KNNImputer# define imputerimputer = KNNImputer() #default k is 5=> n_neighbors=5# fit on the datasetimputer.fit(X)# transform the datasetXtrans = imputer.transform(X)"},{"code":null,"e":2432,"s":2355,"text":"Thus, missing values will be replaced by the mean value of its “neighbours”."},{"code":null,"e":2471,"s":2432,"text":"Is Euclidean Distance always the case?"},{"code":null,"e":2813,"s":2471,"text":"Although Euclidean Distance is the most common method used and taught, it is not always the optimal decision. In fact, it is hard to come up with the right metric just by looking at data, so I would suggest trying a set of them. However, there are some special cases. 
For instance, hamming distance is used in case of a categorical variable."},{"code":null,"e":2879,"s":2813,"text":"Read more: 3 text distances that every data scientist should know"},{"code":null,"e":2939,"s":2879,"text":"Why should we not use the KNN algorithm for large datasets?"},{"code":null,"e":3010,"s":2939,"text":"Here is an overview of the data flow that occurs in the KNN algorithm:"},{"code":null,"e":3202,"s":3010,"text":"Calculate the distances to all vectors in a training set and store themSort the calculated distancesStore the K nearest vectorsCalculate the most frequent class displayed by K nearest vectors"},{"code":null,"e":3274,"s":3202,"text":"Calculate the distances to all vectors in a training set and store them"},{"code":null,"e":3304,"s":3274,"text":"Sort the calculated distances"},{"code":null,"e":3332,"s":3304,"text":"Store the K nearest vectors"},{"code":null,"e":3397,"s":3332,"text":"Calculate the most frequent class displayed by K nearest vectors"},{"code":null,"e":3595,"s":3397,"text":"Imagine you have a very large dataset. 
Therefore, it is not only a bad decision to store a large amount of data but it is also computationally costly to keep calculating and sorting all the values."},{"code":null,"e":3659,"s":3595,"text":"I will break down the implementation into the following stages:"},{"code":null,"e":3846,"s":3659,"text":"Calculate the distances to all vectors in a training set and store themSort the calculated distancesCalculate the most frequent class displayed by K nearest vectors and make a prediction"},{"code":null,"e":3918,"s":3846,"text":"Calculate the distances to all vectors in a training set and store them"},{"code":null,"e":3948,"s":3918,"text":"Sort the calculated distances"},{"code":null,"e":4035,"s":3948,"text":"Calculate the most frequent class displayed by K nearest vectors and make a prediction"},{"code":null,"e":4107,"s":4035,"text":"Calculate the distances to all vectors in a training set and store them"},{"code":null,"e":4407,"s":4107,"text":"It is important to note that there is a large variety of options to choose as a metric; however, I want to use Euclidean Distance as an example. It is the most common metric used to calculate distances among vectors since it is straightforward and easy to explain. The general formula is as follows:"},{"code":null,"e":4460,"s":4407,"text":"Euclidean Distance = sqrt(sum i to N (x1_i — x2_i)2)"},{"code":null,"e":4573,"s":4460,"text":"Thus, let’s summarize it with the following Python code. Note: Don’t forget to import sqrt() from “math” module."},{"code":null,"e":4603,"s":4573,"text":"Sort the calculated distances"},{"code":null,"e":4891,"s":4603,"text":"Firstly, we need to calculate all the distances between a single test sample and all the samples in our training set. 
As distances are obtained, we should sort the list of distances and pick the “nearest” k vectors from our training set by looking at how far they are from a test sample."},{"code":null,"e":4978,"s":4891,"text":"Calculate the most frequent class displayed by K nearest vectors and make a prediction"},{"code":null,"e":5219,"s":4978,"text":"Finally, in order to make a prediction, we should get our k “nearest” neighbours by calling our function I attached above. Thus, the only thing that is left is to count the number of occurrences of each label and pick the most frequent one."},{"code":null,"e":5391,"s":5219,"text":"Let’s summarize everything by combining all the function into a separate class object. Here is a more generalized version of the code, please take time to look through it."},{"code":null,"e":5402,"s":5391,"text":"Comparison"},{"code":null,"e":5644,"s":5402,"text":"Let’s compare our implementation with the one provided by scikit learn. I am going to use a simple toy dataset that contains two predictors, which are age and salary. Thus, we want to predict if a customer is willing to purchase our product."},{"code":null,"e":5878,"s":5644,"text":"I am going to skip preprocessing since it is not what I want to focus on; however, I used a train-test split technique and applied a StandardScaler afterwards. Anyways, if you are interested, I will provide the source code on Github."},{"code":null,"e":6152,"s":5878,"text":"Finally, I will define both models and fit our data. Please refer to the KNN implementation provided above. I am selecting 5 as our default k value. 
Note that for the latter model the default metric is minkowski, and with p=2 is equivalent to the standard Euclidean metric."},{"code":null,"e":6550,"s":6152,"text":"model=KNN(5) #our model model.fit(X_train,y_train)predictions=model.predict(X_test)#our model's predictionsfrom sklearn.neighbors import KNeighborsClassifierclassifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)#The default metric is minkowski, and with p=2 is equivalent to the standard Euclidean metric.classifier.fit(X_train, y_train)y_pred = classifier.predict(X_test)"},{"code":null,"e":6558,"s":6550,"text":"Results"},{"code":null,"e":7063,"s":6558,"text":"The figure attached above shows that both models demonstrated identical performance. The accuracy turned out to be 0.93, which is a pretty good result. The figure attached below is a visualization of our test set results. I am providing a single figure since both models are identical. However, I personally suggest using implementations that are provided already since our implementation is simple and inefficient. Moreover, it is just more convenient not to keep writing the exact same code every time."},{"code":null,"e":7350,"s":7063,"text":"To sum up, K-Nearest Neighbours is considered to be one of the most intuitive machine learning algorithms since it is simple to understand and explain. Additionally, it is quite convenient to demonstrate how everything goes visually. 
In this article we answered the following questions:"},{"code":null,"e":7428,"s":7350,"text":"the k-NN algorithm does more computation on test time rather than train time."},{"code":null,"e":7487,"s":7428,"text":"Why do you need to scale your data for the k-NN algorithm?"},{"code":null,"e":7595,"s":7487,"text":"the k-NN algorithm can be used for imputing the missing value of both categorical and continuous variables."},{"code":null,"e":7634,"s":7595,"text":"Is Euclidean Distance always the case?"},{"code":null,"e":7690,"s":7634,"text":"Why should we not use KNN algorithm for large datasets?"}]}}},{"rowIdx":524,"cells":{"title":{"kind":"string","value":"Best Practices for Airflow Developers | Towards Data Science"},"text":{"kind":"string","value":"Apache Airflow is one of the most popular open-source data orchestration frameworks for building and scheduling batch-based pipelines. To master the art of ETL with Airflow, it is critical to learn how to efficiently develop data pipelines by properly utilizing built-in features, adopting DevOps strategies, and automating testing and monitoring. In this blog post, I will provide several tips and best practices for developing and monitoring data pipelines using Airflow. As always, I will explain the underlying mechanisms of Airflow to help you understand the “why” behind each tip.
(New to Airflow? Read the beginner’s guide to Airflow first.)
(Looking for more Airflow tips? Check out Apache Airflow Tips and Best Practices.)
Macros
Airflow has powerful built-in support for Jinja templating, which lets developers use many useful variables/macros, such as execution timestamp and task details, at runtime. An important use case of macros is to ensure your DAGs are idempotent, which I explain in detail in my previous blog post. 
Since macros allow users to retrieve runtime information at task run level, another great use case of macros is for job alerts, which I will demonstrate with examples in a later section.
However, not all operator parameters are templated, so you need to make sure Jinja templating is enabled for the operators that you plan to pass macros to. To check which parameters in an operator take macros as arguments, look for the template_fields attribute in the operator source code. For example, as of today, the most recent version of PythonOperator has three templated parameters: ‘templates_dict’, ‘op_args’, and ‘op_kwargs’:
template_fields = ('templates_dict', 'op_args', 'op_kwargs')
In order to enable templating for more parameters, simply overwrite the template_fields attribute. Since this attribute is an immutable tuple, make sure to include the original list of templated parameters when you overwrite it.
DAG factory
Even building the simplest DAG in Airflow still requires writing ~30 lines of Python code, in addition to knowledge of Airflow basics. If a large group of non-engineer members in the data organization build and deploy pipelines daily (most of which are very simple workflows with minimal dependency between tasks), then these employees need to invest a lot of time learning to write low-level scripts using the apache-airflow library. 
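The tuple-rebuild trick for template_fields described above can be sketched as follows; OperatorStub stands in for a real Airflow operator class, and my_param is a hypothetical extra parameter:

```python
# Stub mimicking an operator whose template_fields we want to extend.
class OperatorStub:
    template_fields = ('templates_dict', 'op_args', 'op_kwargs')

class MyOperator(OperatorStub):
    # Rebuild the tuple from the original so no templated field is lost.
    template_fields = OperatorStub.template_fields + ('my_param',)

print(MyOperator.template_fields)
# ('templates_dict', 'op_args', 'op_kwargs', 'my_param')
```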
In this case, creating a high-level wrapper on top of Airflow’s native Python library (aka a DAG factory) will allow them to build simple data pipelines using fewer lines of code and without having in-depth knowledge of Airflow, which in turn saves time and resources.
As an example, below is a DAG factory class that returns a DAG that runs all SQL scripts in a folder at a given schedule:
If someone with zero knowledge of Airflow wants to schedule a couple of SQL queries to run daily, now they only need to put those SQL files in a folder and write the 2-line code below:
import DagFactory
dag = DagFactory('jon_snow_sql_dag', 'Jon Snow', '2020-02-19', '@daily', __file__).sql_dag()
The DagFactory here is a very straightforward implementation, but you can enrich it based on your organization’s use cases; for example, adding data lineage and quality checks, or allowing custom task dependencies.
How does this work behind the scenes? Though the file that defines DagFactory is present in Airflow’s DAG folder, no actual DAG exists until a DagFactory instance is initialized with parameters in a DAG file. In the example above, only when dag is created in the global namespace is Airflow able to pick it up, as it only recognizes DAG objects in globals().
Automated Tests
After workflow files are uploaded to Airflow’s DAG folder, the Airflow scheduler will try to quickly compile all the files and validate all DAG definitions, e.g. checking whether there are any loops in task dependencies. Therefore, even if your IDE does not report any compiling errors, Airflow might still reject your DAGs at runtime. 
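One of the validations mentioned above — rejecting loops in task dependencies — can be sketched with a small depth-first search; the task names are made up for illustration:

```python
def has_cycle(deps):
    # deps maps each task to the list of its downstream tasks.
    visiting, done = set(), set()

    def visit(task):
        if task in done:
            return False
        if task in visiting:
            return True  # reached a task already on the current path -> cycle
        visiting.add(task)
        if any(visit(downstream) for downstream in deps.get(task, [])):
            return True
        visiting.remove(task)
        done.add(task)
        return False

    return any(visit(task) for task in deps)

print(has_cycle({'extract': ['transform'], 'transform': ['load']}))     # False
print(has_cycle({'extract': ['transform'], 'transform': ['extract']}))  # True
```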
To catch Airflow exceptions ahead of time before deployment, you need a pytest function to ensure all DAG files are valid:
Option #1: Use importlib
# DAG_FOLDER_PATH: the relative path to the DAG folder
# DAG_FOLDER_NAME: the name of the DAG folder
import importlib

def validate_dags_1():
    for dag_file in DAG_FOLDER_PATH.iterdir():
        if dag_file.is_file() and dag_file.name.endswith('.py'):
            import_path = f'{DAG_FOLDER_NAME}.{dag_file.name[:-3]}'
            importlib.import_module(import_path)
Option #2: Use DagBag
from airflow.models import DagBag

def validate_dags_2():
    dagbag = DagBag()
    assert len(dagbag.import_errors) == 0, f'DAG failures: {dagbag.import_errors}'
Continuous Deployment (CD)
An important but easy way to boost your team’s Airflow development efficiency is by adopting Continuous Deployment (CD) in your engineering workflow, which means whenever a set of changes is committed to the Airflow repository and passes automated tests (if any), these changes will automatically be deployed to Airflow’s DAG folder. There are a lot of DevOps automation tools that can help you achieve this easily:
Jenkins
Drone
GitHub Actions
Real-time failure alerts
If there are tons of Airflow DAGs running at high frequency on your Airflow cluster, users would love to get notified whenever a task fails rather than manually checking task status. Luckily, Airflow supports a handy parameter: on_failure_callback, which will trigger a user-provided callback function with a context dictionary full of task run information. For example, below is a callback function that sends a detailed Slack alert upon task failure:
To set up the failure alert for a DAG, set the callback function in default_args:
default_args = {
    'owner': 'Xinran Waibel',
    'start_date': datetime.datetime(2020, 2, 20),
    'on_failure_callback': slack_failure_msg
}
(TL;DR) Here is a digest of key takeaways from this blog post:
Use macros to build idempotent DAGs and provide relevant error information in job alerts. 
Remember you can use the template_fields attribute to enable templating for any operator parameters.
Create a DAG factory to allow users to create DAGs with efficiency.
Test DAG definitions using importlib or DagBag so that you can deploy workflows with confidence later.
Implement CD for the Airflow repository using automation tools like Jenkins.
Make good use of on_failure_callback to send real-time failure alerts.
Happy learning and see you next time!
Want to learn more about Data Engineering? Check out my Data Engineering 101 column on Towards Data Science:
Minimize Rounding Error to Meet Target in C++
Suppose we have an array of prices P [p1,p2...,pn] and a target value, we have to round each price Pi to Roundi(Pi) so that the rounded array [Round1(P1),Round2(P2)...,Roundn(Pn)] sums to the given target value. Here each operation Roundi(Pi) could be either Floor(Pi) or Ceil(Pi).
We have to return the string "-1" if the rounded array is impossible to sum to target. Otherwise, return the smallest rounding error, which will be (as a string with three places after the decimal) defined as −
∑i=1..n |Roundi(Pi) − Pi|
So if the input is like [“0.700”, “2.800”, “4.900”], and the target is 8. 
Use floor or ceil operations to get (0.7 - 0) + (3 - 2.8) + (5 - 4.9) = 0.7 + 0.2 + 0.1 = 1.0
To solve this, we will follow these steps −
ret := 0
make one priority queue pq of double values
for i in range 0 to size of prices
   x := double value of prices[i]
   low := floor of x
   high := ceiling of x
   if low is not high
      diff := (high - x) – (x - low)
      insert diff into pq
   target := target – low
   ret := ret + (x - low)
if target > size of pq or target < 0, then return “-1”
while target is not 0
   ret := ret + top of pq, delete from pq
   decrease target by 1
s := ret as string
return substring s by taking number up to three decimal places
Let us see the following implementation to get a better understanding −
#include <bits/stdc++.h>
using namespace std;
struct Comparator{
   bool operator()(double a, double b) {
      return !(a < b);
   }
};
class Solution {
   public:
   string minimizeError(vector<string>& prices, int target) {
      double ret = 0;
      priority_queue < double, vector < double >, Comparator > pq;
      for(int i = 0; i < prices.size(); i++){
         double x = stod(prices[i]);
         double low = floor(x);
         double high = ceil(x);
         if(low != 
high){\n double diff = ((high - x) - (x - low));\n pq.push(diff);\n }\n target -= low;\n ret += (x - low);\n }\n if(target > pq.size() || target < 0) return \"-1\";\n while(target--){\n ret += pq.top();\n pq.pop();\n }\n string s = to_string (ret);\n return s.substr (0, s.find_first_of ('.', 0) + 4);\n }\n};\nmain(){\n vector v = {\"0.700\",\"2.800\",\"4.900\"};\n Solution ob;\n cout << (ob.minimizeError(v, 8));\n}\n[\"0.700\",\"2.800\",\"4.900\"]\n8\n\"1.000\""},"parsed":{"kind":"list like","value":[{"code":null,"e":1344,"s":1062,"text":"Suppose we have an array of prices P [p1,p2...,pn] and a target value, we have to round each price Pi to Roundi(Pi) so that the rounded array [Round1(P1),Round2(P2)...,Roundn(Pn)] sums to the given target value. Here each operation Roundi(pi) could be either Floor(Pi) or Ceil(Pi)."},{"code":null,"e":1555,"s":1344,"text":"We have to return the string \"-1\" if the rounded array is impossible to sum to target. Otherwise, return the smallest rounding error, which will be (as a string with three places after the decimal) defined as −"},{"code":null,"e":1579,"s":1555,"text":"∑i−1n|Roundi(????)−????"},{"code":null,"e":1747,"s":1579,"text":"So if the input is like [“0.700”, “2.800”, “4.900”], and the target is 8. 
Use floor or ceil operations to get (0.7 - 0) + (3 - 2.8) + (5 - 4.9) = 0.7 + 0.2 + 0.1 = 1.0"},{"code":null,"e":1791,"s":1747,"text":"To solve this, we will follow these steps −"},{"code":null,"e":1800,"s":1791,"text":"ret := 0"},{"code":null,"e":1809,"s":1800,"text":"ret := 0"},{"code":null,"e":1877,"s":1809,"text":"make one priority queue pq for (double and array) type complex data"},{"code":null,"e":1945,"s":1877,"text":"make one priority queue pq for (double and array) type complex data"},{"code":null,"e":2158,"s":1945,"text":"for i in range 0 to size of pricesx := double value of prices[i]low := floor of xhigh := ceiling of xif low is not highdiff := (high - x) – (x - low)insert diff into pqtarget := target – lowret := ret + (x - low)"},{"code":null,"e":2193,"s":2158,"text":"for i in range 0 to size of prices"},{"code":null,"e":2224,"s":2193,"text":"x := double value of prices[i]"},{"code":null,"e":2255,"s":2224,"text":"x := double value of prices[i]"},{"code":null,"e":2273,"s":2255,"text":"low := floor of x"},{"code":null,"e":2291,"s":2273,"text":"low := floor of x"},{"code":null,"e":2312,"s":2291,"text":"high := ceiling of x"},{"code":null,"e":2333,"s":2312,"text":"high := ceiling of x"},{"code":null,"e":2401,"s":2333,"text":"if low is not highdiff := (high - x) – (x - low)insert diff into pq"},{"code":null,"e":2420,"s":2401,"text":"if low is not high"},{"code":null,"e":2451,"s":2420,"text":"diff := (high - x) – (x - low)"},{"code":null,"e":2482,"s":2451,"text":"diff := (high - x) – (x - low)"},{"code":null,"e":2502,"s":2482,"text":"insert diff into pq"},{"code":null,"e":2522,"s":2502,"text":"insert diff into pq"},{"code":null,"e":2545,"s":2522,"text":"target := target – low"},{"code":null,"e":2568,"s":2545,"text":"target := target – low"},{"code":null,"e":2591,"s":2568,"text":"ret := ret + (x - low)"},{"code":null,"e":2614,"s":2591,"text":"ret := ret + (x - low)"},{"code":null,"e":2669,"s":2614,"text":"if target > size of pq or target < 0, then return 
“-1”"},{"code":null,"e":2724,"s":2669,"text":"if target > size of pq or target < 0, then return “-1”"},{"code":null,"e":2805,"s":2724,"text":"while target is not 0ret := ret + top of pq, delete from pqddecrease target by 1"},{"code":null,"e":2827,"s":2805,"text":"while target is not 0"},{"code":null,"e":2866,"s":2827,"text":"ret := ret + top of pq, delete from pq"},{"code":null,"e":2905,"s":2866,"text":"ret := ret + top of pq, delete from pq"},{"code":null,"e":2907,"s":2905,"text":"d"},{"code":null,"e":2928,"s":2907,"text":"decrease target by 1"},{"code":null,"e":2949,"s":2928,"text":"decrease target by 1"},{"code":null,"e":2968,"s":2949,"text":"s := ret as string"},{"code":null,"e":2987,"s":2968,"text":"s := ret as string"},{"code":null,"e":3050,"s":2987,"text":"return substring s by taking number up to three decimal places"},{"code":null,"e":3113,"s":3050,"text":"return substring s by taking number up to three decimal places"},{"code":null,"e":3183,"s":3113,"text":"Let us see the following implementation to get better understanding −"},{"code":null,"e":3194,"s":3183,"text":" Live Demo"},{"code":null,"e":4175,"s":3194,"text":"#include \nusing namespace std;\nstruct Comparator{\n bool operator()(double a, double b) {\n return !(a < b);\n }\n};\nclass Solution {\n public:\n string minimizeError(vector& prices, int target) {\n double ret = 0;\n priority_queue < double, vector < double >, Comparator > pq;\n for(int i = 0; i < prices.size(); i++){\n double x = stod(prices[i]);\n double low = floor(x);\n double high = ceil(x);\n if(low != high){\n double diff = ((high - x) - (x - low));\n pq.push(diff);\n }\n target -= low;\n ret += (x - low);\n }\n if(target > pq.size() || target < 0) return \"-1\";\n while(target--){\n ret += pq.top();\n pq.pop();\n }\n string s = to_string (ret);\n return s.substr (0, s.find_first_of ('.', 0) + 4);\n }\n};\nmain(){\n vector v = {\"0.700\",\"2.800\",\"4.900\"};\n Solution ob;\n cout << (ob.minimizeError(v, 
8));\n}"},{"code":null,"e":4203,"s":4175,"text":"[\"0.700\",\"2.800\",\"4.900\"]\n8"},{"code":null,"e":4211,"s":4203,"text":"\"1.000\""}],"string":"[\n {\n \"code\": null,\n \"e\": 1344,\n \"s\": 1062,\n \"text\": \"Suppose we have an array of prices P [p1,p2...,pn] and a target value, we have to round each price Pi to Roundi(Pi) so that the rounded array [Round1(P1),Round2(P2)...,Roundn(Pn)] sums to the given target value. Here each operation Roundi(pi) could be either Floor(Pi) or Ceil(Pi).\"\n },\n {\n \"code\": null,\n \"e\": 1555,\n \"s\": 1344,\n \"text\": \"We have to return the string \\\"-1\\\" if the rounded array is impossible to sum to target. Otherwise, return the smallest rounding error, which will be (as a string with three places after the decimal) defined as −\"\n },\n {\n \"code\": null,\n \"e\": 1579,\n \"s\": 1555,\n \"text\": \"∑i−1n|Roundi(????)−????\"\n },\n {\n \"code\": null,\n \"e\": 1747,\n \"s\": 1579,\n \"text\": \"So if the input is like [“0.700”, “2.800”, “4.900”], and the target is 8. 
Use floor or ceil operations to get (0.7 - 0) + (3 - 2.8) + (5 - 4.9) = 0.7 + 0.2 + 0.1 = 1.0\"\n },\n {\n \"code\": null,\n \"e\": 1791,\n \"s\": 1747,\n \"text\": \"To solve this, we will follow these steps −\"\n },\n {\n \"code\": null,\n \"e\": 1800,\n \"s\": 1791,\n \"text\": \"ret := 0\"\n },\n {\n \"code\": null,\n \"e\": 1809,\n \"s\": 1800,\n \"text\": \"ret := 0\"\n },\n {\n \"code\": null,\n \"e\": 1877,\n \"s\": 1809,\n \"text\": \"make one priority queue pq for (double and array) type complex data\"\n },\n {\n \"code\": null,\n \"e\": 1945,\n \"s\": 1877,\n \"text\": \"make one priority queue pq for (double and array) type complex data\"\n },\n {\n \"code\": null,\n \"e\": 2158,\n \"s\": 1945,\n \"text\": \"for i in range 0 to size of pricesx := double value of prices[i]low := floor of xhigh := ceiling of xif low is not highdiff := (high - x) – (x - low)insert diff into pqtarget := target – lowret := ret + (x - low)\"\n },\n {\n \"code\": null,\n \"e\": 2193,\n \"s\": 2158,\n \"text\": \"for i in range 0 to size of prices\"\n },\n {\n \"code\": null,\n \"e\": 2224,\n \"s\": 2193,\n \"text\": \"x := double value of prices[i]\"\n },\n {\n \"code\": null,\n \"e\": 2255,\n \"s\": 2224,\n \"text\": \"x := double value of prices[i]\"\n },\n {\n \"code\": null,\n \"e\": 2273,\n \"s\": 2255,\n \"text\": \"low := floor of x\"\n },\n {\n \"code\": null,\n \"e\": 2291,\n \"s\": 2273,\n \"text\": \"low := floor of x\"\n },\n {\n \"code\": null,\n \"e\": 2312,\n \"s\": 2291,\n \"text\": \"high := ceiling of x\"\n },\n {\n \"code\": null,\n \"e\": 2333,\n \"s\": 2312,\n \"text\": \"high := ceiling of x\"\n },\n {\n \"code\": null,\n \"e\": 2401,\n \"s\": 2333,\n \"text\": \"if low is not highdiff := (high - x) – (x - low)insert diff into pq\"\n },\n {\n \"code\": null,\n \"e\": 2420,\n \"s\": 2401,\n \"text\": \"if low is not high\"\n },\n {\n \"code\": null,\n \"e\": 2451,\n \"s\": 2420,\n \"text\": \"diff := (high - x) – (x - low)\"\n },\n {\n \"code\": null,\n 
\"e\": 2482,\n \"s\": 2451,\n \"text\": \"diff := (high - x) – (x - low)\"\n },\n {\n \"code\": null,\n \"e\": 2502,\n \"s\": 2482,\n \"text\": \"insert diff into pq\"\n },\n {\n \"code\": null,\n \"e\": 2522,\n \"s\": 2502,\n \"text\": \"insert diff into pq\"\n },\n {\n \"code\": null,\n \"e\": 2545,\n \"s\": 2522,\n \"text\": \"target := target – low\"\n },\n {\n \"code\": null,\n \"e\": 2568,\n \"s\": 2545,\n \"text\": \"target := target – low\"\n },\n {\n \"code\": null,\n \"e\": 2591,\n \"s\": 2568,\n \"text\": \"ret := ret + (x - low)\"\n },\n {\n \"code\": null,\n \"e\": 2614,\n \"s\": 2591,\n \"text\": \"ret := ret + (x - low)\"\n },\n {\n \"code\": null,\n \"e\": 2669,\n \"s\": 2614,\n \"text\": \"if target > size of pq or target < 0, then return “-1”\"\n },\n {\n \"code\": null,\n \"e\": 2724,\n \"s\": 2669,\n \"text\": \"if target > size of pq or target < 0, then return “-1”\"\n },\n {\n \"code\": null,\n \"e\": 2805,\n \"s\": 2724,\n \"text\": \"while target is not 0ret := ret + top of pq, delete from pqddecrease target by 1\"\n },\n {\n \"code\": null,\n \"e\": 2827,\n \"s\": 2805,\n \"text\": \"while target is not 0\"\n },\n {\n \"code\": null,\n \"e\": 2866,\n \"s\": 2827,\n \"text\": \"ret := ret + top of pq, delete from pq\"\n },\n {\n \"code\": null,\n \"e\": 2905,\n \"s\": 2866,\n \"text\": \"ret := ret + top of pq, delete from pq\"\n },\n {\n \"code\": null,\n \"e\": 2907,\n \"s\": 2905,\n \"text\": \"d\"\n },\n {\n \"code\": null,\n \"e\": 2928,\n \"s\": 2907,\n \"text\": \"decrease target by 1\"\n },\n {\n \"code\": null,\n \"e\": 2949,\n \"s\": 2928,\n \"text\": \"decrease target by 1\"\n },\n {\n \"code\": null,\n \"e\": 2968,\n \"s\": 2949,\n \"text\": \"s := ret as string\"\n },\n {\n \"code\": null,\n \"e\": 2987,\n \"s\": 2968,\n \"text\": \"s := ret as string\"\n },\n {\n \"code\": null,\n \"e\": 3050,\n \"s\": 2987,\n \"text\": \"return substring s by taking number up to three decimal places\"\n },\n {\n \"code\": null,\n \"e\": 
3113,\n \"s\": 3050,\n \"text\": \"return substring s by taking number up to three decimal places\"\n },\n {\n \"code\": null,\n \"e\": 3183,\n \"s\": 3113,\n \"text\": \"Let us see the following implementation to get better understanding −\"\n },\n {\n \"code\": null,\n \"e\": 3194,\n \"s\": 3183,\n \"text\": \" Live Demo\"\n },\n {\n \"code\": null,\n \"e\": 4175,\n \"s\": 3194,\n \"text\": \"#include <bits/stdc++.h>\\nusing namespace std;\\nstruct Comparator{\\n bool operator()(double a, double b) {\\n return a > b;\\n }\\n};\\nclass Solution {\\n public:\\n string minimizeError(vector<string>& prices, int target) {\\n double ret = 0;\\n priority_queue < double, vector < double >, Comparator > pq;\\n for(int i = 0; i < prices.size(); i++){\\n double x = stod(prices[i]);\\n double low = floor(x);\\n double high = ceil(x);\\n if(low != high){\\n double diff = ((high - x) - (x - low));\\n pq.push(diff);\\n }\\n target -= low;\\n ret += (x - low);\\n }\\n if(target > (int)pq.size() || target < 0) return \\\"-1\\\";\\n while(target--){\\n ret += pq.top();\\n pq.pop();\\n }\\n string s = to_string (ret);\\n return s.substr (0, s.find_first_of ('.', 0) + 4);\\n }\\n};\\nint main(){\\n vector<string> v = {\\\"0.700\\\",\\\"2.800\\\",\\\"4.900\\\"};\\n Solution ob;\\n cout << (ob.minimizeError(v, 8));\\n}\"\n },\n {\n \"code\": null,\n \"e\": 4203,\n \"s\": 4175,\n \"text\": \"[\\\"0.700\\\",\\\"2.800\\\",\\\"4.900\\\"]\\n8\"\n },\n {\n \"code\": null,\n \"e\": 4211,\n \"s\": 4203,\n \"text\": \"\\\"1.000\\\"\"\n }\n]"}}},{"rowIdx":526,"cells":{"title":{"kind":"string","value":"Node.js shift() function - GeeksforGeeks"},"text":{"kind":"string","value":"14 Oct, 2021\nshift() is an array function from Node.js that is used to delete an element from the front of an array.\nSyntax:\narray_name.shift()\nParameter: This function does not take any parameter.\nReturn type: The function returns the array after deleting the element.\nThe program below demonstrates the working of the function:\nProgram 1:\nfunction shiftDemo(){ arr.shift(); console.log(arr);}var arr=[17, 55, 87, 49, 78];shiftDemo();\nOutput:\n[ 55, 87, 49, 78 ]\nProgram 2:\nfunction shiftDemo(){ arr.shift(); console.log(arr);}var arr=['a', 'b'];shiftDemo();\nOutput:\n[ 'b' ]\nProgram 3:\nlet Lang = [\"Python\", \"C\", \"Java\", \"JavaScript\"];while ((i = Lang.shift()) !== undefined) { Lang.shift();}console.log(Lang);\nOutput:\n[]\nNodeJS-function\nNode.js\nWeb Technologies\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nComments\nOld Comments\nHow to build a basic CRUD app with Node.js and ReactJS ?\nHow to connect Node.js with React.js ?\nMongoose Populate() Method\nExpress.js req.params Property\nMongoose find() Function\nTop 10 Front End Developer Skills That You Need in 2022\nTop 10 Projects For Beginners To Practice HTML and CSS Skills\nHow to fetch data from an API in ReactJS ?\nHow to insert spaces/tabs in text using HTML/CSS?\nDifference between var, let and const keywords in JavaScript"},"parsed":{"kind":"list like","value":[{"code":null,"e":24531,"s":24503,"text":"\n14 Oct, 2021"},{"code":null,"e":24632,"s":24531,"text":"shift() is an array function from Node.js that is used to delete an element from the front of an array."},{"code":null,"e":24640,"s":24632,"text":"Syntax:"},{"code":null,"e":24659,"s":24640,"text":"array_name.shift()"},{"code":null,"e":24714,"s":24659,"text":"Parameter: This function does not take any parameter."},{"code":null,"e":24786,"s":24714,"text":"Return type: The function returns the array after deleting the element."},{"code":null,"e":24846,"s":24786,"text":"The program below demonstrates the working of the function:"},{"code":null,"e":24857,"s":24846,"text":"Program 1:"},{"code":"function shiftDemo(){ arr.shift(); console.log(arr);}var arr=[17, 55, 87, 49, 78];shiftDemo();","e":24954,"s":24857,"text":null},{"code":null,"e":24962,"s":24954,"text":"Output:"},{"code":null,"e":24981,"s":24962,"text":"[ 55, 87, 49, 78 
]"},{"code":null,"e":24992,"s":24981,"text":"Program 2:"},{"code":"function shiftDemo(){ arr.shift(); console.log(arr);}var arr=['a', 'b'];shiftDemo();","e":25079,"s":24992,"text":null},{"code":null,"e":25087,"s":25079,"text":"Output:"},{"code":null,"e":25095,"s":25087,"text":"[ 'b' ]"},{"code":null,"e":25106,"s":25095,"text":"Program 3:"},{"code":"let Lang = [\"Python\", \"C\", \"Java\", \"JavaScript\"];while ((i = Lang.shift()) !== undefined) { Lang.shift();}console.log(Lang);","e":25234,"s":25106,"text":null},{"code":null,"e":25242,"s":25234,"text":"Output:"},{"code":null,"e":25245,"s":25242,"text":"[]"},{"code":null,"e":25261,"s":25245,"text":"NodeJS-function"},{"code":null,"e":25269,"s":25261,"text":"Node.js"},{"code":null,"e":25286,"s":25269,"text":"Web Technologies"},{"code":null,"e":25384,"s":25286,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":25393,"s":25384,"text":"Comments"},{"code":null,"e":25406,"s":25393,"text":"Old Comments"},{"code":null,"e":25463,"s":25406,"text":"How to build a basic CRUD app with Node.js and ReactJS ?"},{"code":null,"e":25502,"s":25463,"text":"How to connect Node.js with React.js ?"},{"code":null,"e":25529,"s":25502,"text":"Mongoose Populate() Method"},{"code":null,"e":25560,"s":25529,"text":"Express.js req.params Property"},{"code":null,"e":25585,"s":25560,"text":"Mongoose find() Function"},{"code":null,"e":25641,"s":25585,"text":"Top 10 Front End Developer Skills That You Need in 2022"},{"code":null,"e":25703,"s":25641,"text":"Top 10 Projects For Beginners To Practice HTML and CSS Skills"},{"code":null,"e":25746,"s":25703,"text":"How to fetch data from an API in ReactJS ?"},{"code":null,"e":25796,"s":25746,"text":"How to insert spaces/tabs in text using HTML/CSS?"}],"string":"[\n {\n \"code\": null,\n \"e\": 24531,\n \"s\": 24503,\n \"text\": \"\\n14 Oct, 2021\"\n },\n {\n \"code\": null,\n \"e\": 24632,\n \"s\": 24531,\n \"text\": \"shift() is an 
array function from Node.js that is used to delete an element from the front of an array.\"\n },\n {\n \"code\": null,\n \"e\": 24640,\n \"s\": 24632,\n \"text\": \"Syntax:\"\n },\n {\n \"code\": null,\n \"e\": 24659,\n \"s\": 24640,\n \"text\": \"array_name.shift()\"\n },\n {\n \"code\": null,\n \"e\": 24714,\n \"s\": 24659,\n \"text\": \"Parameter: This function does not take any parameter.\"\n },\n {\n \"code\": null,\n \"e\": 24786,\n \"s\": 24714,\n \"text\": \"Return type: The function returns the array after deleting the element.\"\n },\n {\n \"code\": null,\n \"e\": 24846,\n \"s\": 24786,\n \"text\": \"The program below demonstrates the working of the function:\"\n },\n {\n \"code\": null,\n \"e\": 24857,\n \"s\": 24846,\n \"text\": \"Program 1:\"\n },\n {\n \"code\": \"function shiftDemo(){ arr.shift(); console.log(arr);}var arr=[17, 55, 87, 49, 78];shiftDemo();\",\n \"e\": 24954,\n \"s\": 24857,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 24962,\n \"s\": 24954,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 24981,\n \"s\": 24962,\n \"text\": \"[ 55, 87, 49, 78 ]\"\n },\n {\n \"code\": null,\n \"e\": 24992,\n \"s\": 24981,\n \"text\": \"Program 2:\"\n },\n {\n \"code\": \"function shiftDemo(){ arr.shift(); console.log(arr);}var arr=['a', 'b'];shiftDemo();\",\n \"e\": 25079,\n \"s\": 24992,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 25087,\n \"s\": 25079,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 25095,\n \"s\": 25087,\n \"text\": \"[ 'b' ]\"\n },\n {\n \"code\": null,\n \"e\": 25106,\n \"s\": 25095,\n \"text\": \"Program 3:\"\n },\n {\n \"code\": \"let Lang = [\\\"Python\\\", \\\"C\\\", \\\"Java\\\", \\\"JavaScript\\\"];while ((i = Lang.shift()) !== undefined) { Lang.shift();}console.log(Lang);\",\n \"e\": 25234,\n \"s\": 25106,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 25242,\n \"s\": 25234,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 25245,\n \"s\": 25242,\n \"text\": \"[]\"\n 
},\n {\n \"code\": null,\n \"e\": 25261,\n \"s\": 25245,\n \"text\": \"NodeJS-function\"\n },\n {\n \"code\": null,\n \"e\": 25269,\n \"s\": 25261,\n \"text\": \"Node.js\"\n },\n {\n \"code\": null,\n \"e\": 25286,\n \"s\": 25269,\n \"text\": \"Web Technologies\"\n },\n {\n \"code\": null,\n \"e\": 25384,\n \"s\": 25286,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 25393,\n \"s\": 25384,\n \"text\": \"Comments\"\n },\n {\n \"code\": null,\n \"e\": 25406,\n \"s\": 25393,\n \"text\": \"Old Comments\"\n },\n {\n \"code\": null,\n \"e\": 25463,\n \"s\": 25406,\n \"text\": \"How to build a basic CRUD app with Node.js and ReactJS ?\"\n },\n {\n \"code\": null,\n \"e\": 25502,\n \"s\": 25463,\n \"text\": \"How to connect Node.js with React.js ?\"\n },\n {\n \"code\": null,\n \"e\": 25529,\n \"s\": 25502,\n \"text\": \"Mongoose Populate() Method\"\n },\n {\n \"code\": null,\n \"e\": 25560,\n \"s\": 25529,\n \"text\": \"Express.js req.params Property\"\n },\n {\n \"code\": null,\n \"e\": 25585,\n \"s\": 25560,\n \"text\": \"Mongoose find() Function\"\n },\n {\n \"code\": null,\n \"e\": 25641,\n \"s\": 25585,\n \"text\": \"Top 10 Front End Developer Skills That You Need in 2022\"\n },\n {\n \"code\": null,\n \"e\": 25703,\n \"s\": 25641,\n \"text\": \"Top 10 Projects For Beginners To Practice HTML and CSS Skills\"\n },\n {\n \"code\": null,\n \"e\": 25746,\n \"s\": 25703,\n \"text\": \"How to fetch data from an API in ReactJS ?\"\n },\n {\n \"code\": null,\n \"e\": 25796,\n \"s\": 25746,\n \"text\": \"How to insert spaces/tabs in text using HTML/CSS?\"\n }\n]"}}},{"rowIdx":527,"cells":{"title":{"kind":"string","value":"Operator Precedence and Associativity in C"},"text":{"kind":"string","value":"Operator precedence determines the grouping of terms in an expression and decides how an expression is evaluated. 
Certain operators have higher precedence than others; for example, the multiplication operator has a higher precedence than the addition operator.\nFor example, x = 7 + 3 * 2; here, x is assigned 13, not 20 because operator * has a higher precedence than +, so 3 * 2 is evaluated first and the result is then added to 7.\nHere, operators with the highest precedence appear at the top of the table, those with the lowest appear at the bottom. Within an expression, higher precedence operators will be evaluated first.\n#include <stdio.h>\nint main() {\n int a = 20;\n int b = 10;\n int c = 15;\n int d = 5;\n int e;\n e = (a + b) * c / d; // ( 30 * 15 ) / 5\n printf(\"Value of (a + b) * c / d is : %d\\n\", e );\n e = ((a + b) * c) / d; // (30 * 15 ) / 5\n printf(\"Value of ((a + b) * c) / d is : %d\\n\" , e );\n e = (a + b) * (c / d); // (30) * (15/5)\n printf(\"Value of (a + b) * (c / d) is : %d\\n\", e );\n e = a + (b * c) / d; // 20 + (150/5)\n printf(\"Value of a + (b * c) / d is : %d\\n\" , e );\n return 0;\n}\nValue of (a + b) * c / d is : 90\nValue of ((a + b) * c) / d is : 90\nValue of (a + b) * (c / d) is : 90\nValue of a + (b * c) / d is : 50"},"parsed":{"kind":"list like","value":[{"code":null,"e":1323,"s":1062,"text":"Operator precedence determines the grouping of terms in an expression and decides how an expression is evaluated. Certain operators have higher precedence than others; for example, the multiplication operator has a higher precedence than the addition operator."},{"code":null,"e":1492,"s":1323,"text":"For example, x = 7 + 3 * 2; here, x is assigned 13, not 20 because operator * has a higher precedence than +, so 3 * 2 is evaluated first and the result is then added to 7."},{"code":null,"e":1687,"s":1492,"text":"Here, operators with the highest precedence appear at the top of the table, those with the lowest appear at the bottom. 
Within an expression, higher precedence operators will be evaluated first."},{"code":null,"e":2187,"s":1687,"text":"#include <stdio.h>\nint main() {\n int a = 20;\n int b = 10;\n int c = 15;\n int d = 5;\n int e;\n e = (a + b) * c / d; // ( 30 * 15 ) / 5\n printf(\"Value of (a + b) * c / d is : %d\\n\", e );\n e = ((a + b) * c) / d; // (30 * 15 ) / 5\n printf(\"Value of ((a + b) * c) / d is : %d\\n\" , e );\n e = (a + b) * (c / d); // (30) * (15/5)\n printf(\"Value of (a + b) * (c / d) is : %d\\n\", e );\n e = a + (b * c) / d; // 20 + (150/5)\n printf(\"Value of a + (b * c) / d is : %d\\n\" , e );\n return 0;\n}"},{"code":null,"e":2323,"s":2187,"text":"Value of (a + b) * c / d is : 90\nValue of ((a + b) * c) / d is : 90\nValue of (a + b) * (c / d) is : 90\nValue of a + (b * c) / d is : 50"}],"string":"[\n {\n \"code\": null,\n \"e\": 1323,\n \"s\": 1062,\n \"text\": \"Operator precedence determines the grouping of terms in an expression and decides how an expression is evaluated. Certain operators have higher precedence than others; for example, the multiplication operator has a higher precedence than the addition operator.\"\n },\n {\n \"code\": null,\n \"e\": 1492,\n \"s\": 1323,\n \"text\": \"For example, x = 7 + 3 * 2; here, x is assigned 13, not 20 because operator * has a higher precedence than +, so 3 * 2 is evaluated first and the result is then added to 7.\"\n },\n {\n \"code\": null,\n \"e\": 1687,\n \"s\": 1492,\n \"text\": \"Here, operators with the highest precedence appear at the top of the table, those with the lowest appear at the bottom. 
Within an expression, higher precedence operators will be evaluated first.\"\n },\n {\n \"code\": null,\n \"e\": 2187,\n \"s\": 1687,\n \"text\": \"#include <stdio.h>\\nint main() {\\n int a = 20;\\n int b = 10;\\n int c = 15;\\n int d = 5;\\n int e;\\n e = (a + b) * c / d; // ( 30 * 15 ) / 5\\n printf(\\\"Value of (a + b) * c / d is : %d\\\\n\\\", e );\\n e = ((a + b) * c) / d; // (30 * 15 ) / 5\\n printf(\\\"Value of ((a + b) * c) / d is : %d\\\\n\\\" , e );\\n e = (a + b) * (c / d); // (30) * (15/5)\\n printf(\\\"Value of (a + b) * (c / d) is : %d\\\\n\\\", e );\\n e = a + (b * c) / d; // 20 + (150/5)\\n printf(\\\"Value of a + (b * c) / d is : %d\\\\n\\\" , e );\\n return 0;\\n}\"\n },\n {\n \"code\": null,\n \"e\": 2323,\n \"s\": 2187,\n \"text\": \"Value of (a + b) * c / d is : 90\\nValue of ((a + b) * c) / d is : 90\\nValue of (a + b) * (c / d) is : 90\\nValue of a + (b * c) / d is : 50\"\n }\n]"}}},{"rowIdx":528,"cells":{"title":{"kind":"string","value":"Apache Presto - MySQL Connector"},"text":{"kind":"string","value":"The MySQL connector is used to query an external MySQL database.\nMySQL server installation.\nHopefully you have installed mysql server on your machine. To enable mysql properties on Presto server, you must create a file “mysql.properties” in “etc/catalog” directory. Issue the following command to create a mysql.properties file.\n$ cd etc \n$ cd catalog \n$ vi mysql.properties \n\nconnector.name = mysql \nconnection-url = jdbc:mysql://localhost:3306 \nconnection-user = root \nconnection-password = pwd \n\nSave the file and quit the terminal. In the above file, you must enter your mysql password in connection-password field.\nOpen MySQL server and create a database using the following command.\ncreate database tutorials\nNow you have created “tutorials” database in the server. 
To enable database type, use the command “use tutorials” in the query window.\nLet’s create a simple table on “tutorials” database.\ncreate table author(auth_id int not null, auth_name varchar(50),topic varchar(100))\nAfter creating a table, insert three records using the following query.\ninsert into author values(1,'Doug Cutting','Hadoop') \ninsert into author values(2,'James Gosling','java') \ninsert into author values(3,'Dennis Ritchie','C')\nTo retrieve all the records, type the following query.\nselect * from author\nauth_id auth_name topic \n1 Doug Cutting Hadoop \n2 James Gosling java \n3 Dennis Ritchie C \n\nAs of now, you have queried data using MySQL server. Let’s connect Mysql storage plugin to Presto server.\nType the following command to connect MySql plugin on Presto CLI.\n./presto --server localhost:8080 --catalog mysql --schema tutorials \n\nYou will receive the following response.\npresto:tutorials> \n\nHere “tutorials” refers to schema in mysql server.\nTo list out all the schemas in mysql, type the following query in Presto server.\npresto:tutorials> show schemas from mysql;\n Schema \n-------------------- \n information_schema \n performance_schema \n sys \n tutorials\n\nFrom this result, we can conclude the first three schemas as predefined and the last one as created by yourself.\nFollowing query lists out all the tables in tutorials schema.\npresto:tutorials> show tables from mysql.tutorials; \n Table \n-------- \n author\n\nWe have created only one table in this schema. 
If you have created multiple tables, it will list out all the tables.\nTo describe the table fields, type the following query.\npresto:tutorials> describe mysql.tutorials.author;\n Column | Type | Comment \n-----------+--------------+--------- \n auth_id | integer | \n auth_name | varchar(50) | \n topic | varchar(100) |\n\npresto:tutorials> show columns from mysql.tutorials.author; \n Column | Type | Comment \n-----------+--------------+--------- \n auth_id | integer | \n auth_name | varchar(50) | \n topic | varchar(100) |\n\nTo fetch all the records from mysql table, issue the following query.\npresto:tutorials> select * from mysql.tutorials.author; \nauth_id | auth_name | topic \n---------+----------------+-------- \n 1 | Doug Cutting | Hadoop \n 2 | James Gosling | java \n 3 | Dennis Ritchie | C\n\nFrom this result, you can retrieve mysql server records in Presto.\nMysql connector doesn’t support create table query but you can create a table using as command.\npresto:tutorials> create table mysql.tutorials.sample as \nselect * from mysql.tutorials.author; \nCREATE TABLE: 3 rows\n\nYou can’t insert rows directly because this connector has some limitations. 
It cannot support the following queries −\ncreate\ninsert\nupdate\ndelete\ndrop\nTo view the records in the newly created table, type the following query.\npresto:tutorials> select * from mysql.tutorials.sample; \nauth_id | auth_name | topic \n---------+----------------+-------- \n 1 | Doug Cutting | Hadoop \n 2 | James Gosling | java \n 3 | Dennis Ritchie | C\n\n\n 46 Lectures \n 3.5 hours \n\n Arnab Chakraborty\n\n 23 Lectures \n 1.5 hours \n\n Mukund Kumar Mishra\n\n 16 Lectures \n 1 hours \n\n Nilay Mehta\n\n 52 Lectures \n 1.5 hours \n\n Bigdata Engineer\n\n 14 Lectures \n 1 hours \n\n Bigdata Engineer\n\n 23 Lectures \n 1 hours \n\n Bigdata Engineer\n Print\n Add Notes\n Bookmark this page"},"parsed":{"kind":"list like","value":[{"code":null,"e":2071,"s":2006,"text":"The MySQL connector is used to query an external MySQL database."},{"code":null,"e":2098,"s":2071,"text":"MySQL server installation."},{"code":null,"e":2335,"s":2098,"text":"Hopefully you have installed mysql server on your machine. To enable mysql properties on Presto server, you must create a file “mysql.properties” in “etc/catalog” directory. Issue the following command to create a mysql.properties file."},{"code":null,"e":2507,"s":2335,"text":"$ cd etc \n$ cd catalog \n$ vi mysql.properties \n\nconnector.name = mysql \nconnection-url = jdbc:mysql://localhost:3306 \nconnection-user = root \nconnection-password = pwd \n"},{"code":null,"e":2628,"s":2507,"text":"Save the file and quit the terminal. In the above file, you must enter your mysql password in connection-password field."},{"code":null,"e":2697,"s":2628,"text":"Open MySQL server and create a database using the following command."},{"code":null,"e":2723,"s":2697,"text":"create database tutorials"},{"code":null,"e":2858,"s":2723,"text":"Now you have created “tutorials” database in the server. 
To enable database type, use the command “use tutorials” in the query window."},{"code":null,"e":2911,"s":2858,"text":"Let’s create a simple table on “tutorials” database."},{"code":null,"e":2995,"s":2911,"text":"create table author(auth_id int not null, auth_name varchar(50),topic varchar(100))"},{"code":null,"e":3067,"s":2995,"text":"After creating a table, insert three records using the following query."},{"code":null,"e":3224,"s":3067,"text":"insert into author values(1,'Doug Cutting','Hadoop') \ninsert into author values(2,'James Gosling','java') \ninsert into author values(3,'Dennis Ritchie','C')"},{"code":null,"e":3279,"s":3224,"text":"To retrieve all the records, type the following query."},{"code":null,"e":3300,"s":3279,"text":"select * from author"},{"code":null,"e":3432,"s":3300,"text":"auth_id auth_name topic \n1 Doug Cutting Hadoop \n2 James Gosling java \n3 Dennis Ritchie C \n"},{"code":null,"e":3538,"s":3432,"text":"As of now, you have queried data using MySQL server. Let’s connect Mysql storage plugin to Presto server."},{"code":null,"e":3604,"s":3538,"text":"Type the following command to connect MySql plugin on Presto CLI."},{"code":null,"e":3674,"s":3604,"text":"./presto --server localhost:8080 --catalog mysql --schema tutorials \n"},{"code":null,"e":3715,"s":3674,"text":"You will receive the following response."},{"code":null,"e":3735,"s":3715,"text":"presto:tutorials> \n"},{"code":null,"e":3786,"s":3735,"text":"Here “tutorials” refers to schema in mysql server."},{"code":null,"e":3867,"s":3786,"text":"To list out all the schemas in mysql, type the following query in Presto server."},{"code":null,"e":3910,"s":3867,"text":"presto:tutorials> show schemas from mysql;"},{"code":null,"e":4006,"s":3910,"text":" Schema \n-------------------- \n information_schema \n performance_schema \n sys \n tutorials\n"},{"code":null,"e":4119,"s":4006,"text":"From this result, we can conclude the first three schemas as predefined and the last one as created by 
yourself."},{"code":null,"e":4181,"s":4119,"text":"Following query lists out all the tables in tutorials schema."},{"code":null,"e":4234,"s":4181,"text":"presto:tutorials> show tables from mysql.tutorials; "},{"code":null,"e":4262,"s":4234,"text":" Table \n-------- \n author\n"},{"code":null,"e":4379,"s":4262,"text":"We have created only one table in this schema. If you have created multiple tables, it will list out all the tables."},{"code":null,"e":4435,"s":4379,"text":"To describe the table fields, type the following query."},{"code":null,"e":4486,"s":4435,"text":"presto:tutorials> describe mysql.tutorials.author;"},{"code":null,"e":4648,"s":4486,"text":" Column | Type | Comment \n-----------+--------------+--------- \n auth_id | integer | \n auth_name | varchar(50) | \n topic | varchar(100) |\n"},{"code":null,"e":4709,"s":4648,"text":"presto:tutorials> show columns from mysql.tutorials.author; "},{"code":null,"e":4871,"s":4709,"text":" Column | Type | Comment \n-----------+--------------+--------- \n auth_id | integer | \n auth_name | varchar(50) | \n topic | varchar(100) |\n"},{"code":null,"e":4941,"s":4871,"text":"To fetch all the records from mysql table, issue the following query."},{"code":null,"e":4998,"s":4941,"text":"presto:tutorials> select * from mysql.tutorials.author; "},{"code":null,"e":5171,"s":4998,"text":"auth_id | auth_name | topic \n---------+----------------+-------- \n 1 | Doug Cutting | Hadoop \n 2 | James Gosling | java \n 3 | Dennis Ritchie | C\n"},{"code":null,"e":5238,"s":5171,"text":"From this result, you can retrieve mysql server records in Presto."},{"code":null,"e":5334,"s":5238,"text":"Mysql connector doesn’t support create table query but you can create a table using as command."},{"code":null,"e":5431,"s":5334,"text":"presto:tutorials> create table mysql.tutorials.sample as \nselect * from mysql.tutorials.author; "},{"code":null,"e":5453,"s":5431,"text":"CREATE TABLE: 3 rows\n"},{"code":null,"e":5571,"s":5453,"text":"You can’t 
insert rows directly because this connector has some limitations. It cannot support the following queries −"},{"code":null,"e":5578,"s":5571,"text":"create"},{"code":null,"e":5585,"s":5578,"text":"insert"},{"code":null,"e":5592,"s":5585,"text":"update"},{"code":null,"e":5599,"s":5592,"text":"delete"},{"code":null,"e":5604,"s":5599,"text":"drop"},{"code":null,"e":5678,"s":5604,"text":"To view the records in the newly created table, type the following query."},{"code":null,"e":5735,"s":5678,"text":"presto:tutorials> select * from mysql.tutorials.sample; "},{"code":null,"e":5908,"s":5735,"text":"auth_id | auth_name | topic \n---------+----------------+-------- \n 1 | Doug Cutting | Hadoop \n 2 | James Gosling | java \n 3 | Dennis Ritchie | C\n"},{"code":null,"e":5943,"s":5908,"text":"\n 46 Lectures \n 3.5 hours \n"},{"code":null,"e":5962,"s":5943,"text":" Arnab Chakraborty"},{"code":null,"e":5997,"s":5962,"text":"\n 23 Lectures \n 1.5 hours \n"},{"code":null,"e":6018,"s":5997,"text":" Mukund Kumar Mishra"},{"code":null,"e":6051,"s":6018,"text":"\n 16 Lectures \n 1 hours \n"},{"code":null,"e":6064,"s":6051,"text":" Nilay Mehta"},{"code":null,"e":6099,"s":6064,"text":"\n 52 Lectures \n 1.5 hours \n"},{"code":null,"e":6117,"s":6099,"text":" Bigdata Engineer"},{"code":null,"e":6150,"s":6117,"text":"\n 14 Lectures \n 1 hours \n"},{"code":null,"e":6168,"s":6150,"text":" Bigdata Engineer"},{"code":null,"e":6201,"s":6168,"text":"\n 23 Lectures \n 1 hours \n"},{"code":null,"e":6219,"s":6201,"text":" Bigdata Engineer"},{"code":null,"e":6226,"s":6219,"text":" Print"},{"code":null,"e":6237,"s":6226,"text":" Add Notes"}],"string":"[\n {\n \"code\": null,\n \"e\": 2071,\n \"s\": 2006,\n \"text\": \"The MySQL connector is used to query an external MySQL database.\"\n },\n {\n \"code\": null,\n \"e\": 2098,\n \"s\": 2071,\n \"text\": \"MySQL server installation.\"\n },\n {\n \"code\": null,\n \"e\": 2335,\n \"s\": 2098,\n \"text\": \"Hopefully you have installed mysql server on your 
machine. To enable mysql properties on Presto server, you must create a file “mysql.properties” in “etc/catalog” directory. Issue the following command to create a mysql.properties file.\"\n },\n {\n \"code\": null,\n \"e\": 2507,\n \"s\": 2335,\n \"text\": \"$ cd etc \\n$ cd catalog \\n$ vi mysql.properties \\n\\nconnector.name = mysql \\nconnection-url = jdbc:mysql://localhost:3306 \\nconnection-user = root \\nconnection-password = pwd \\n\"\n },\n {\n \"code\": null,\n \"e\": 2628,\n \"s\": 2507,\n \"text\": \"Save the file and quit the terminal. In the above file, you must enter your mysql password in connection-password field.\"\n },\n {\n \"code\": null,\n \"e\": 2697,\n \"s\": 2628,\n \"text\": \"Open MySQL server and create a database using the following command.\"\n },\n {\n \"code\": null,\n \"e\": 2723,\n \"s\": 2697,\n \"text\": \"create database tutorials\"\n },\n {\n \"code\": null,\n \"e\": 2858,\n \"s\": 2723,\n \"text\": \"Now you have created “tutorials” database in the server. 
To enable database type, use the command “use tutorials” in the query window.\"\n },\n {\n \"code\": null,\n \"e\": 2911,\n \"s\": 2858,\n \"text\": \"Let’s create a simple table on “tutorials” database.\"\n },\n {\n \"code\": null,\n \"e\": 2995,\n \"s\": 2911,\n \"text\": \"create table author(auth_id int not null, auth_name varchar(50),topic varchar(100))\"\n },\n {\n \"code\": null,\n \"e\": 3067,\n \"s\": 2995,\n \"text\": \"After creating a table, insert three records using the following query.\"\n },\n {\n \"code\": null,\n \"e\": 3224,\n \"s\": 3067,\n \"text\": \"insert into author values(1,'Doug Cutting','Hadoop') \\ninsert into author values(2,'James Gosling','java') \\ninsert into author values(3,'Dennis Ritchie','C')\"\n },\n {\n \"code\": null,\n \"e\": 3279,\n \"s\": 3224,\n \"text\": \"To retrieve all the records, type the following query.\"\n },\n {\n \"code\": null,\n \"e\": 3300,\n \"s\": 3279,\n \"text\": \"select * from author\"\n },\n {\n \"code\": null,\n \"e\": 3432,\n \"s\": 3300,\n \"text\": \"auth_id auth_name topic \\n1 Doug Cutting Hadoop \\n2 James Gosling java \\n3 Dennis Ritchie C \\n\"\n },\n {\n \"code\": null,\n \"e\": 3538,\n \"s\": 3432,\n \"text\": \"As of now, you have queried data using MySQL server. 
Let’s connect Mysql storage plugin to Presto server.\"\n },\n {\n \"code\": null,\n \"e\": 3604,\n \"s\": 3538,\n \"text\": \"Type the following command to connect MySql plugin on Presto CLI.\"\n },\n {\n \"code\": null,\n \"e\": 3674,\n \"s\": 3604,\n \"text\": \"./presto --server localhost:8080 --catalog mysql --schema tutorials \\n\"\n },\n {\n \"code\": null,\n \"e\": 3715,\n \"s\": 3674,\n \"text\": \"You will receive the following response.\"\n },\n {\n \"code\": null,\n \"e\": 3735,\n \"s\": 3715,\n \"text\": \"presto:tutorials> \\n\"\n },\n {\n \"code\": null,\n \"e\": 3786,\n \"s\": 3735,\n \"text\": \"Here “tutorials” refers to schema in mysql server.\"\n },\n {\n \"code\": null,\n \"e\": 3867,\n \"s\": 3786,\n \"text\": \"To list out all the schemas in mysql, type the following query in Presto server.\"\n },\n {\n \"code\": null,\n \"e\": 3910,\n \"s\": 3867,\n \"text\": \"presto:tutorials> show schemas from mysql;\"\n },\n {\n \"code\": null,\n \"e\": 4006,\n \"s\": 3910,\n \"text\": \" Schema \\n-------------------- \\n information_schema \\n performance_schema \\n sys \\n tutorials\\n\"\n },\n {\n \"code\": null,\n \"e\": 4119,\n \"s\": 4006,\n \"text\": \"From this result, we can conclude the first three schemas as predefined and the last one as created by yourself.\"\n },\n {\n \"code\": null,\n \"e\": 4181,\n \"s\": 4119,\n \"text\": \"Following query lists out all the tables in tutorials schema.\"\n },\n {\n \"code\": null,\n \"e\": 4234,\n \"s\": 4181,\n \"text\": \"presto:tutorials> show tables from mysql.tutorials; \"\n },\n {\n \"code\": null,\n \"e\": 4262,\n \"s\": 4234,\n \"text\": \" Table \\n-------- \\n author\\n\"\n },\n {\n \"code\": null,\n \"e\": 4379,\n \"s\": 4262,\n \"text\": \"We have created only one table in this schema. 
If you have created multiple tables, it will list out all the tables.\"\n },\n {\n \"code\": null,\n \"e\": 4435,\n \"s\": 4379,\n \"text\": \"To describe the table fields, type the following query.\"\n },\n {\n \"code\": null,\n \"e\": 4486,\n \"s\": 4435,\n \"text\": \"presto:tutorials> describe mysql.tutorials.author;\"\n },\n {\n \"code\": null,\n \"e\": 4648,\n \"s\": 4486,\n \"text\": \" Column | Type | Comment \\n-----------+--------------+--------- \\n auth_id | integer | \\n auth_name | varchar(50) | \\n topic | varchar(100) |\\n\"\n },\n {\n \"code\": null,\n \"e\": 4709,\n \"s\": 4648,\n \"text\": \"presto:tutorials> show columns from mysql.tutorials.author; \"\n },\n {\n \"code\": null,\n \"e\": 4871,\n \"s\": 4709,\n \"text\": \" Column | Type | Comment \\n-----------+--------------+--------- \\n auth_id | integer | \\n auth_name | varchar(50) | \\n topic | varchar(100) |\\n\"\n },\n {\n \"code\": null,\n \"e\": 4941,\n \"s\": 4871,\n \"text\": \"To fetch all the records from mysql table, issue the following query.\"\n },\n {\n \"code\": null,\n \"e\": 4998,\n \"s\": 4941,\n \"text\": \"presto:tutorials> select * from mysql.tutorials.author; \"\n },\n {\n \"code\": null,\n \"e\": 5171,\n \"s\": 4998,\n \"text\": \"auth_id | auth_name | topic \\n---------+----------------+-------- \\n 1 | Doug Cutting | Hadoop \\n 2 | James Gosling | java \\n 3 | Dennis Ritchie | C\\n\"\n },\n {\n \"code\": null,\n \"e\": 5238,\n \"s\": 5171,\n \"text\": \"From this result, you can retrieve mysql server records in Presto.\"\n },\n {\n \"code\": null,\n \"e\": 5334,\n \"s\": 5238,\n \"text\": \"Mysql connector doesn’t support create table query but you can create a table using as command.\"\n },\n {\n \"code\": null,\n \"e\": 5431,\n \"s\": 5334,\n \"text\": \"presto:tutorials> create table mysql.tutorials.sample as \\nselect * from mysql.tutorials.author; \"\n },\n {\n \"code\": null,\n \"e\": 5453,\n \"s\": 5431,\n \"text\": \"CREATE TABLE: 3 rows\\n\"\n },\n {\n 
\"code\": null,\n \"e\": 5571,\n \"s\": 5453,\n \"text\": \"You can’t insert rows directly because this connector has some limitations. It cannot support the following queries −\"\n },\n {\n \"code\": null,\n \"e\": 5578,\n \"s\": 5571,\n \"text\": \"create\"\n },\n {\n \"code\": null,\n \"e\": 5585,\n \"s\": 5578,\n \"text\": \"insert\"\n },\n {\n \"code\": null,\n \"e\": 5592,\n \"s\": 5585,\n \"text\": \"update\"\n },\n {\n \"code\": null,\n \"e\": 5599,\n \"s\": 5592,\n \"text\": \"delete\"\n },\n {\n \"code\": null,\n \"e\": 5604,\n \"s\": 5599,\n \"text\": \"drop\"\n },\n {\n \"code\": null,\n \"e\": 5678,\n \"s\": 5604,\n \"text\": \"To view the records in the newly created table, type the following query.\"\n },\n {\n \"code\": null,\n \"e\": 5735,\n \"s\": 5678,\n \"text\": \"presto:tutorials> select * from mysql.tutorials.sample; \"\n },\n {\n \"code\": null,\n \"e\": 5908,\n \"s\": 5735,\n \"text\": \"auth_id | auth_name | topic \\n---------+----------------+-------- \\n 1 | Doug Cutting | Hadoop \\n 2 | James Gosling | java \\n 3 | Dennis Ritchie | C\\n\"\n },\n {\n \"code\": null,\n \"e\": 5943,\n \"s\": 5908,\n \"text\": \"\\n 46 Lectures \\n 3.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 5962,\n \"s\": 5943,\n \"text\": \" Arnab Chakraborty\"\n },\n {\n \"code\": null,\n \"e\": 5997,\n \"s\": 5962,\n \"text\": \"\\n 23 Lectures \\n 1.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 6018,\n \"s\": 5997,\n \"text\": \" Mukund Kumar Mishra\"\n },\n {\n \"code\": null,\n \"e\": 6051,\n \"s\": 6018,\n \"text\": \"\\n 16 Lectures \\n 1 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 6064,\n \"s\": 6051,\n \"text\": \" Nilay Mehta\"\n },\n {\n \"code\": null,\n \"e\": 6099,\n \"s\": 6064,\n \"text\": \"\\n 52 Lectures \\n 1.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 6117,\n \"s\": 6099,\n \"text\": \" Bigdata Engineer\"\n },\n {\n \"code\": null,\n \"e\": 6150,\n \"s\": 6117,\n \"text\": \"\\n 14 Lectures \\n 1 hours \\n\"\n },\n {\n \"code\": 
null,\n \"e\": 6168,\n \"s\": 6150,\n \"text\": \" Bigdata Engineer\"\n },\n {\n \"code\": null,\n \"e\": 6201,\n \"s\": 6168,\n \"text\": \"\\n 23 Lectures \\n 1 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 6219,\n \"s\": 6201,\n \"text\": \" Bigdata Engineer\"\n },\n {\n \"code\": null,\n \"e\": 6226,\n \"s\": 6219,\n \"text\": \" Print\"\n },\n {\n \"code\": null,\n \"e\": 6237,\n \"s\": 6226,\n \"text\": \" Add Notes\"\n }\n]"}}},{"rowIdx":529,"cells":{"title":{"kind":"string","value":"C# Program to Get the Machine Name or Host Name Using Environment Class - GeeksforGeeks"},"text":{"kind":"string","value":"30 Nov, 2021\nEnvironment Class provides information about the current platform and manipulates, the current platform. It is useful for getting and setting various operating system-related information. We can use it in such a way that retrieves command-line arguments information, exit codes information, environment variable settings information, contents of the call stack information and time since last system boot in milliseconds information. By just using the predefined MachineName Property we can get the machine name or the hostname using the Environment class. This property is used to find the NetBIOS name of the computer. 
It also throws an InvalidOperationException when the name of the computer cannot be obtained.
Syntax:
Environment.MachineName
Return: This property returns a string that contains the machine name.
Example:
C#
// C# program to find the name of the machine
// using the Environment class
using System;

class GFG {
    static public void Main()
    {
        // Here we get the machine name
        // using the MachineName property
        // of the Environment class
        Console.WriteLine("Machine Name is " + Environment.MachineName);
    }
}
Output:
Machine Name is Check
avtarkumar719
CSharp-programs
Picked
C#
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Destructors in C#
Extension Method in C#
HashSet in C# with Examples
Top 50 C# Interview Questions & Answers
C# | How to insert an element in an Array?
Partial Classes in C#
C# | Inheritance
C# | List Class
Difference between Hashtable and Dictionary in C#
Lambda Expressions in C#"},"parsed":{"kind":"list like","value":[{"code":null,"e":24302,"s":24274,"text":"\n30 Nov, 2021"},{"code":null,"e":25022,"s":24302,"text":"Environment Class provides information about the current platform and manipulates, the current platform. It is useful for getting and setting various operating system-related information. We can use it in such a way that retrieves command-line arguments information, exit codes information, environment variable settings information, contents of the call stack information and time since last system boot in milliseconds information. By just using the predefined MachineName Property we can get the machine name or the hostname using the Environment class. This property is used to find the NetBIOS name of the computer. 
It also throws InvalidOperationException when this property does not get the name of the computer."},{"code":null,"e":25030,"s":25022,"text":"Syntax:"},{"code":null,"e":25054,"s":25030,"text":"Environment.MachineName"},{"code":null,"e":25123,"s":25054,"text":"Return: This method returns a string that contains the machine name."},{"code":null,"e":25132,"s":25123,"text":"Example:"},{"code":null,"e":25135,"s":25132,"text":"C#"},{"code":"// C# program to find the name of the machine// Using Environment classusing System; class GFG{ static public void Main(){ // Here we get the machine name // Using the MachineName property // of the Environment class Console.WriteLine(\"Machine Name is\" + Environment.MachineName);}}","e":25435,"s":25135,"text":null},{"code":null,"e":25443,"s":25435,"text":"Output:"},{"code":null,"e":25465,"s":25443,"text":"Machine Name is Check"},{"code":null,"e":25479,"s":25465,"text":"avtarkumar719"},{"code":null,"e":25495,"s":25479,"text":"CSharp-programs"},{"code":null,"e":25502,"s":25495,"text":"Picked"},{"code":null,"e":25505,"s":25502,"text":"C#"},{"code":null,"e":25603,"s":25505,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":25621,"s":25603,"text":"Destructors in C#"},{"code":null,"e":25644,"s":25621,"text":"Extension Method in C#"},{"code":null,"e":25672,"s":25644,"text":"HashSet in C# with Examples"},{"code":null,"e":25712,"s":25672,"text":"Top 50 C# Interview Questions & Answers"},{"code":null,"e":25755,"s":25712,"text":"C# | How to insert an element in an Array?"},{"code":null,"e":25777,"s":25755,"text":"Partial Classes in C#"},{"code":null,"e":25794,"s":25777,"text":"C# | Inheritance"},{"code":null,"e":25810,"s":25794,"text":"C# | List Class"},{"code":null,"e":25860,"s":25810,"text":"Difference between Hashtable and Dictionary in C#"}],"string":"[\n {\n \"code\": null,\n \"e\": 24302,\n \"s\": 24274,\n \"text\": \"\\n30 Nov, 2021\"\n },\n {\n \"code\": null,\n 
\"e\": 25022,\n \"s\": 24302,\n \"text\": \"Environment Class provides information about the current platform and manipulates, the current platform. It is useful for getting and setting various operating system-related information. We can use it in such a way that retrieves command-line arguments information, exit codes information, environment variable settings information, contents of the call stack information and time since last system boot in milliseconds information. By just using the predefined MachineName Property we can get the machine name or the hostname using the Environment class. This property is used to find the NetBIOS name of the computer. It also throws InvalidOperationException when this property does not get the name of the computer.\"\n },\n {\n \"code\": null,\n \"e\": 25030,\n \"s\": 25022,\n \"text\": \"Syntax:\"\n },\n {\n \"code\": null,\n \"e\": 25054,\n \"s\": 25030,\n \"text\": \"Environment.MachineName\"\n },\n {\n \"code\": null,\n \"e\": 25123,\n \"s\": 25054,\n \"text\": \"Return: This method returns a string that contains the machine name.\"\n },\n {\n \"code\": null,\n \"e\": 25132,\n \"s\": 25123,\n \"text\": \"Example:\"\n },\n {\n \"code\": null,\n \"e\": 25135,\n \"s\": 25132,\n \"text\": \"C#\"\n },\n {\n \"code\": \"// C# program to find the name of the machine// Using Environment classusing System; class GFG{ static public void Main(){ // Here we get the machine name // Using the MachineName property // of the Environment class Console.WriteLine(\\\"Machine Name is\\\" + Environment.MachineName);}}\",\n \"e\": 25435,\n \"s\": 25135,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 25443,\n \"s\": 25435,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 25465,\n \"s\": 25443,\n \"text\": \"Machine Name is Check\"\n },\n {\n \"code\": null,\n \"e\": 25479,\n \"s\": 25465,\n \"text\": \"avtarkumar719\"\n },\n {\n \"code\": null,\n \"e\": 25495,\n \"s\": 25479,\n \"text\": \"CSharp-programs\"\n },\n {\n \"code\": 
null,\n \"e\": 25502,\n \"s\": 25495,\n \"text\": \"Picked\"\n },\n {\n \"code\": null,\n \"e\": 25505,\n \"s\": 25502,\n \"text\": \"C#\"\n },\n {\n \"code\": null,\n \"e\": 25603,\n \"s\": 25505,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 25621,\n \"s\": 25603,\n \"text\": \"Destructors in C#\"\n },\n {\n \"code\": null,\n \"e\": 25644,\n \"s\": 25621,\n \"text\": \"Extension Method in C#\"\n },\n {\n \"code\": null,\n \"e\": 25672,\n \"s\": 25644,\n \"text\": \"HashSet in C# with Examples\"\n },\n {\n \"code\": null,\n \"e\": 25712,\n \"s\": 25672,\n \"text\": \"Top 50 C# Interview Questions & Answers\"\n },\n {\n \"code\": null,\n \"e\": 25755,\n \"s\": 25712,\n \"text\": \"C# | How to insert an element in an Array?\"\n },\n {\n \"code\": null,\n \"e\": 25777,\n \"s\": 25755,\n \"text\": \"Partial Classes in C#\"\n },\n {\n \"code\": null,\n \"e\": 25794,\n \"s\": 25777,\n \"text\": \"C# | Inheritance\"\n },\n {\n \"code\": null,\n \"e\": 25810,\n \"s\": 25794,\n \"text\": \"C# | List Class\"\n },\n {\n \"code\": null,\n \"e\": 25860,\n \"s\": 25810,\n \"text\": \"Difference between Hashtable and Dictionary in C#\"\n }\n]"}}},{"rowIdx":530,"cells":{"title":{"kind":"string","value":"ByteArrayOutputStream write() method in Java with Examples - GeeksforGeeks"},"text":{"kind":"string","value":"28 May, 2020\nThe write() method of ByteArrayOutputStream class in Java is used in two ways:\n1. The write(int) method of ByteArrayOutputStream class in Java is used to write the specified byte to the ByteArrayOutputStream. This specified byte is passed as integer type parameter in this write() method. 
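Only the low-order 8 bits of that int argument are actually written; the upper 24 bits are ignored. A small self-contained sketch of this behavior (the class name is illustrative, not from the original article):

```java
import java.io.ByteArrayOutputStream;

public class WriteIntTruncation {
    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // 327 does not fit in one byte; only the low 8 bits are kept:
        // 327 & 0xFF == 71, which is the byte value of 'G'
        out.write(327);
        out.write(69); // 'E'
        System.out.println(out.toString()); // prints "GE"
    }
}
```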
This write() method writes a single byte at a time.
Syntax:
public void write(int b)

Specified By: This method is specified by the write() method of the OutputStream class.
Parameters: This method accepts one parameter b, which represents the byte to be written.
Return value: This method does not return any value.
Exceptions: This method does not throw any exception.
The below program illustrates the write(int) method of the ByteArrayOutputStream class in the IO package:
Program:
// Java program to illustrate
// ByteArrayOutputStream write(int) method

import java.io.*;

public class GFG {
    public static void main(String[] args) throws Exception {
        // Create a ByteArrayOutputStream
        ByteArrayOutputStream byteArrayOutStr = new ByteArrayOutputStream();

        // Write bytes to the ByteArrayOutputStream
        byteArrayOutStr.write(71);
        byteArrayOutStr.write(69);
        byteArrayOutStr.write(69);
        byteArrayOutStr.write(75);
        byteArrayOutStr.write(83);

        // Print the ByteArrayOutputStream contents
        System.out.println(byteArrayOutStr.toString());
    }
}
GEEKS

2. The write(byte[ ], int, int) method of ByteArrayOutputStream class in Java is used to write the given number of bytes from the given byte array, starting at the given offset, to the ByteArrayOutputStream.
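To make the offset/length behavior concrete: exactly length bytes are copied, and out-of-range arguments are rejected before anything is written. A small self-contained sketch (names are illustrative, not from the original article):

```java
import java.io.ByteArrayOutputStream;

public class WriteSliceDemo {
    public static void main(String[] args) {
        byte[] buf = { 72, 69, 76, 76, 79 }; // bytes of "HELLO"
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        // Copy 3 bytes starting at index 1: 'E', 'L', 'L'
        out.write(buf, 1, 3);
        System.out.println(out.toString()); // prints "ELL"

        // offset + length exceeds buf.length, so the call fails
        // with an IndexOutOfBoundsException
        try {
            out.write(buf, 3, 5);
        } catch (IndexOutOfBoundsException e) {
            System.out.println("out-of-range slice rejected");
        }
    }
}
```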
This method is different from the above write() method as it can write several bytes at a time.
Syntax:
public void write(byte[ ] b,
                  int offset,
                  int length)

Overrides: This method overrides the write() method of the OutputStream class.
Parameters: This method accepts three parameters:
b – It represents the byte array.
offset – It represents the starting index in the byte array.
length – It represents the number of bytes to be written.
Return value: This method does not return any value.
Exceptions: This method throws a NullPointerException if b is null, and an IndexOutOfBoundsException if offset is negative, length is negative, or offset + length is greater than the length of b.
The below program illustrates the write(byte[ ], int, int) method of the ByteArrayOutputStream class in the IO package:
Program:
// Java program to illustrate
// ByteArrayOutputStream
// write(byte[ ], int, int) method

import java.io.*;

public class GFG {
    public static void main(String[] args) throws Exception {
        // Create a ByteArrayOutputStream
        ByteArrayOutputStream byteArrayOutStr = new ByteArrayOutputStream();

        // Create a byte array
        byte[] buf = { 71, 69, 69, 75, 83, 70, 79,
                       82, 71, 69, 69, 75, 83 };

        // Write 5 bytes of the array, starting at index 8,
        // to the ByteArrayOutputStream
        byteArrayOutStr.write(buf, 8, 5);

        // Print the ByteArrayOutputStream contents
        System.out.println(byteArrayOutStr.toString());
    }
}
GEEKS

References:1. https://docs.oracle.com/javase/10/docs/api/java/io/ByteArrayOutputStream.html#write(int)2.
https://docs.oracle.com/javase/10/docs/api/java/io/ByteArrayOutputStream.html#write(byte%5B%5D, int, int)\nJava-Functions\nJava-IO package\nJava\nJava\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nStream In Java\nConstructors in Java\nExceptions in Java\nFunctional Interfaces in Java\nDifferent ways of Reading a text file in Java\nGenerics in Java\nIntroduction to Java\nComparator Interface in Java with Examples\nInternal Working of HashMap in Java\nStrings in Java"},"parsed":{"kind":"list like","value":[{"code":null,"e":25225,"s":25197,"text":"\n28 May, 2020"},{"code":null,"e":25304,"s":25225,"text":"The write() method of ByteArrayOutputStream class in Java is used in two ways:"},{"code":null,"e":25564,"s":25304,"text":"1. The write(int) method of ByteArrayOutputStream class in Java is used to write the specified byte to the ByteArrayOutputStream. This specified byte is passed as integer type parameter in this write() method. This write() method writes single byte at a time."},{"code":null,"e":25572,"s":25564,"text":"Syntax:"},{"code":null,"e":25598,"s":25572,"text":"public void write(int b)\n"},{"code":null,"e":25678,"s":25598,"text":"Specified By: This method is specified by write() method of OutputStream class."},{"code":null,"e":25767,"s":25678,"text":"Parameters: This method accepts one parameter b which represents the byte to be written."},{"code":null,"e":25819,"s":25767,"text":"Return value: The method does not return any value."},{"code":null,"e":25873,"s":25819,"text":"Exceptions: This method does not throw any exception."},{"code":null,"e":25963,"s":25873,"text":"Below program illustrates write(int) method in ByteArrayOutputStream class in IO package:"},{"code":null,"e":25972,"s":25963,"text":"Program:"},{"code":"// Java program to illustrate// ByteArrayOutputStream write(int) method import java.io.*; public class GFG { public static void main(String[] args) throws Exception { // Create 
byteArrayOutputStream ByteArrayOutputStream byteArrayOutStr = new ByteArrayOutputStream(); // Write byte // to byteArrayOutputStream byteArrayOutStr.write(71); byteArrayOutStr.write(69); byteArrayOutStr.write(69); byteArrayOutStr.write(75); byteArrayOutStr.write(83); // Print the byteArrayOutputStream System.out.println( byteArrayOutStr.toString()); }}","e":26637,"s":25972,"text":null},{"code":null,"e":26644,"s":26637,"text":"GEEKS\n"},{"code":null,"e":26959,"s":26644,"text":"2. The write(byte[ ], int, int) method of ByteArrayOutputStream class in Java is used to write the given number of bytes from the given byte array starting at given offset of the byte array to theByteArrayOutputStream. This method is different from the above write() method as it can write several bytes at a time."},{"code":null,"e":26967,"s":26959,"text":"Syntax:"},{"code":null,"e":27057,"s":26967,"text":"public void write(byte[ ] b,\n int offset,\n int length)\n"},{"code":null,"e":27128,"s":27057,"text":"Overrides: This method overrides write() method of OutputStream class."},{"code":null,"e":27178,"s":27128,"text":"Parameters: This method accepts three parameters:"},{"code":null,"e":27212,"s":27178,"text":"b – It represents the byte array."},{"code":null,"e":27273,"s":27212,"text":"offset – It represents the starting index in the byte array."},{"code":null,"e":27331,"s":27273,"text":"length – It represents the number of bytes to be written."},{"code":null,"e":27383,"s":27331,"text":"Return value: The method does not return any value."},{"code":null,"e":27437,"s":27383,"text":"Exceptions: This method does not throw any exception."},{"code":null,"e":27541,"s":27437,"text":"Below program illustrates write(byte[ ], int, int) method in ByteArrayOutputStream class in IO package:"},{"code":null,"e":27550,"s":27541,"text":"Program:"},{"code":"// Java program to illustrate// ByteArrayOutputStream// write(byte[ ], int, int) method import java.io.*; public class GFG { public static void main(String[] 
args) throws Exception { // Create byteArrayOutputStream ByteArrayOutputStream byteArrayOutStr = new ByteArrayOutputStream(); // Create byte array byte[] buf = { 71, 69, 69, 75, 83, 70, 79, 82, 71, 69, 69, 75, 83 }; // Write byte array // to byteArrayOutputStream byteArrayOutStr.write(buf, 8, 5); // Print the byteArrayOutputStream System.out.println( byteArrayOutStr.toString()); }}","e":28248,"s":27550,"text":null},{"code":null,"e":28255,"s":28248,"text":"GEEKS\n"},{"code":null,"e":28466,"s":28255,"text":"References:1. https://docs.oracle.com/javase/10/docs/api/java/io/ByteArrayOutputStream.html#write(int)2. https://docs.oracle.com/javase/10/docs/api/java/io/ByteArrayOutputStream.html#write(byte%5B%5D, int, int)"},{"code":null,"e":28481,"s":28466,"text":"Java-Functions"},{"code":null,"e":28497,"s":28481,"text":"Java-IO package"},{"code":null,"e":28502,"s":28497,"text":"Java"},{"code":null,"e":28507,"s":28502,"text":"Java"},{"code":null,"e":28605,"s":28507,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":28620,"s":28605,"text":"Stream In Java"},{"code":null,"e":28641,"s":28620,"text":"Constructors in Java"},{"code":null,"e":28660,"s":28641,"text":"Exceptions in Java"},{"code":null,"e":28690,"s":28660,"text":"Functional Interfaces in Java"},{"code":null,"e":28736,"s":28690,"text":"Different ways of Reading a text file in Java"},{"code":null,"e":28753,"s":28736,"text":"Generics in Java"},{"code":null,"e":28774,"s":28753,"text":"Introduction to Java"},{"code":null,"e":28817,"s":28774,"text":"Comparator Interface in Java with Examples"},{"code":null,"e":28853,"s":28817,"text":"Internal Working of HashMap in Java"}],"string":"[\n {\n \"code\": null,\n \"e\": 25225,\n \"s\": 25197,\n \"text\": \"\\n28 May, 2020\"\n },\n {\n \"code\": null,\n \"e\": 25304,\n \"s\": 25225,\n \"text\": \"The write() method of ByteArrayOutputStream class in Java is used in two ways:\"\n },\n {\n \"code\": null,\n 
\"e\": 25564,\n \"s\": 25304,\n \"text\": \"1. The write(int) method of ByteArrayOutputStream class in Java is used to write the specified byte to the ByteArrayOutputStream. This specified byte is passed as integer type parameter in this write() method. This write() method writes single byte at a time.\"\n },\n {\n \"code\": null,\n \"e\": 25572,\n \"s\": 25564,\n \"text\": \"Syntax:\"\n },\n {\n \"code\": null,\n \"e\": 25598,\n \"s\": 25572,\n \"text\": \"public void write(int b)\\n\"\n },\n {\n \"code\": null,\n \"e\": 25678,\n \"s\": 25598,\n \"text\": \"Specified By: This method is specified by write() method of OutputStream class.\"\n },\n {\n \"code\": null,\n \"e\": 25767,\n \"s\": 25678,\n \"text\": \"Parameters: This method accepts one parameter b which represents the byte to be written.\"\n },\n {\n \"code\": null,\n \"e\": 25819,\n \"s\": 25767,\n \"text\": \"Return value: The method does not return any value.\"\n },\n {\n \"code\": null,\n \"e\": 25873,\n \"s\": 25819,\n \"text\": \"Exceptions: This method does not throw any exception.\"\n },\n {\n \"code\": null,\n \"e\": 25963,\n \"s\": 25873,\n \"text\": \"Below program illustrates write(int) method in ByteArrayOutputStream class in IO package:\"\n },\n {\n \"code\": null,\n \"e\": 25972,\n \"s\": 25963,\n \"text\": \"Program:\"\n },\n {\n \"code\": \"// Java program to illustrate// ByteArrayOutputStream write(int) method import java.io.*; public class GFG { public static void main(String[] args) throws Exception { // Create byteArrayOutputStream ByteArrayOutputStream byteArrayOutStr = new ByteArrayOutputStream(); // Write byte // to byteArrayOutputStream byteArrayOutStr.write(71); byteArrayOutStr.write(69); byteArrayOutStr.write(69); byteArrayOutStr.write(75); byteArrayOutStr.write(83); // Print the byteArrayOutputStream System.out.println( byteArrayOutStr.toString()); }}\",\n \"e\": 26637,\n \"s\": 25972,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 26644,\n \"s\": 26637,\n \"text\": 
\"GEEKS\\n\"\n },\n {\n \"code\": null,\n \"e\": 26959,\n \"s\": 26644,\n \"text\": \"2. The write(byte[ ], int, int) method of ByteArrayOutputStream class in Java is used to write the given number of bytes from the given byte array starting at given offset of the byte array to theByteArrayOutputStream. This method is different from the above write() method as it can write several bytes at a time.\"\n },\n {\n \"code\": null,\n \"e\": 26967,\n \"s\": 26959,\n \"text\": \"Syntax:\"\n },\n {\n \"code\": null,\n \"e\": 27057,\n \"s\": 26967,\n \"text\": \"public void write(byte[ ] b,\\n int offset,\\n int length)\\n\"\n },\n {\n \"code\": null,\n \"e\": 27128,\n \"s\": 27057,\n \"text\": \"Overrides: This method overrides write() method of OutputStream class.\"\n },\n {\n \"code\": null,\n \"e\": 27178,\n \"s\": 27128,\n \"text\": \"Parameters: This method accepts three parameters:\"\n },\n {\n \"code\": null,\n \"e\": 27212,\n \"s\": 27178,\n \"text\": \"b – It represents the byte array.\"\n },\n {\n \"code\": null,\n \"e\": 27273,\n \"s\": 27212,\n \"text\": \"offset – It represents the starting index in the byte array.\"\n },\n {\n \"code\": null,\n \"e\": 27331,\n \"s\": 27273,\n \"text\": \"length – It represents the number of bytes to be written.\"\n },\n {\n \"code\": null,\n \"e\": 27383,\n \"s\": 27331,\n \"text\": \"Return value: The method does not return any value.\"\n },\n {\n \"code\": null,\n \"e\": 27437,\n \"s\": 27383,\n \"text\": \"Exceptions: This method does not throw any exception.\"\n },\n {\n \"code\": null,\n \"e\": 27541,\n \"s\": 27437,\n \"text\": \"Below program illustrates write(byte[ ], int, int) method in ByteArrayOutputStream class in IO package:\"\n },\n {\n \"code\": null,\n \"e\": 27550,\n \"s\": 27541,\n \"text\": \"Program:\"\n },\n {\n \"code\": \"// Java program to illustrate// ByteArrayOutputStream// write(byte[ ], int, int) method import java.io.*; public class GFG { public static void main(String[] args) throws Exception { // 
Create byteArrayOutputStream ByteArrayOutputStream byteArrayOutStr = new ByteArrayOutputStream(); // Create byte array byte[] buf = { 71, 69, 69, 75, 83, 70, 79, 82, 71, 69, 69, 75, 83 }; // Write byte array // to byteArrayOutputStream byteArrayOutStr.write(buf, 8, 5); // Print the byteArrayOutputStream System.out.println( byteArrayOutStr.toString()); }}\",\n \"e\": 28248,\n \"s\": 27550,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 28255,\n \"s\": 28248,\n \"text\": \"GEEKS\\n\"\n },\n {\n \"code\": null,\n \"e\": 28466,\n \"s\": 28255,\n \"text\": \"References:1. https://docs.oracle.com/javase/10/docs/api/java/io/ByteArrayOutputStream.html#write(int)2. https://docs.oracle.com/javase/10/docs/api/java/io/ByteArrayOutputStream.html#write(byte%5B%5D, int, int)\"\n },\n {\n \"code\": null,\n \"e\": 28481,\n \"s\": 28466,\n \"text\": \"Java-Functions\"\n },\n {\n \"code\": null,\n \"e\": 28497,\n \"s\": 28481,\n \"text\": \"Java-IO package\"\n },\n {\n \"code\": null,\n \"e\": 28502,\n \"s\": 28497,\n \"text\": \"Java\"\n },\n {\n \"code\": null,\n \"e\": 28507,\n \"s\": 28502,\n \"text\": \"Java\"\n },\n {\n \"code\": null,\n \"e\": 28605,\n \"s\": 28507,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 28620,\n \"s\": 28605,\n \"text\": \"Stream In Java\"\n },\n {\n \"code\": null,\n \"e\": 28641,\n \"s\": 28620,\n \"text\": \"Constructors in Java\"\n },\n {\n \"code\": null,\n \"e\": 28660,\n \"s\": 28641,\n \"text\": \"Exceptions in Java\"\n },\n {\n \"code\": null,\n \"e\": 28690,\n \"s\": 28660,\n \"text\": \"Functional Interfaces in Java\"\n },\n {\n \"code\": null,\n \"e\": 28736,\n \"s\": 28690,\n \"text\": \"Different ways of Reading a text file in Java\"\n },\n {\n \"code\": null,\n \"e\": 28753,\n \"s\": 28736,\n \"text\": \"Generics in Java\"\n },\n {\n \"code\": null,\n \"e\": 28774,\n \"s\": 28753,\n \"text\": \"Introduction to Java\"\n },\n 
{\n \"code\": null,\n \"e\": 28817,\n \"s\": 28774,\n \"text\": \"Comparator Interface in Java with Examples\"\n },\n {\n \"code\": null,\n \"e\": 28853,\n \"s\": 28817,\n \"text\": \"Internal Working of HashMap in Java\"\n }\n]"}}},{"rowIdx":531,"cells":{"title":{"kind":"string","value":"How to Add JAR file to Classpath in Java? - GeeksforGeeks"},"text":{"kind":"string","value":"07 Aug, 2021\nJAR is an abbreviation of JAVA Archive. It is used for aggregating multiple files into a single one, and it is present in a ZIP format. It can also be used as an archiving tool but the main intention to use this file for development is that the Java applets and their components(.class files) can be easily downloaded to a client browser in just one single HTTP request, rather than establishing a new connection for just one thing. This will improve the speed with which applets can be loaded onto a web page and starts their work. It also supports compression, which reduces the file size and the download time will be improved.\nMethods: JAR file can be added in a classpath in two different ways\nUsing eclipse or any IDEUsing command line\nUsing eclipse or any IDE\nUsing command line\nStep 1: Right-Click on your project name\nStep 2: Click on Build Path\nStep 3: Click on configure build path\nStep 4: Click on libraries and click on “Add External JARs”\nStep 5: Select the jar file from the folder where you have saved your jar file\nStep 6: Click on Apply and Ok.\nCommand 1: By including JAR name in CLASSPATH environment variable\nCLASSPATH environment variable is not case-sensitive. It can be either Classpath or classpath which is similar to PATH environment variable which we can use to locate Java binaries like javaw and java command.\nCommand 2: By including name of JAR file in -a classpath command-line option\nThis option is viable when we are passing –classpath option while running our java program like java –classpath $(CLASSPATH) Main. 
In this case, the CLASSPATH shell variable contains the list of JAR files required by the application. One of the best advantages of using the classpath command-line option is that it allows every application to have its own set of JARs on its classpath; the setting is not shared with every other Java program that runs on the same host.
Command 3: By including the jar name in the Class-Path option in the manifest
When we run an executable JAR file, the JVM reads the Class-Path attribute from the manifest file inside the META-INF folder. This attribute is given the highest priority and overrides both the CLASSPATH environment variable and the –classpath command-line option. Hence, it is a good place to include all JAR files required by the Java application.
Command 4: By using Java 6 wildcard option to include multiple JAR
From Java 1.6+ onwards we can use a wildcard to include all JARs in a directory when setting the classpath, either in the CLASSPATH variable or with the classpath command-line option. For example, to add multiple JARs to the classpath using the Java 6 wildcard:
java.exe -classpath D:\lib\* Main
In this method, we include all JAR files inside the ‘D:\lib’ directory in the classpath. We must ensure that the syntax is correct. Some more important points about using the Java 6 wildcard to include multiple JARs in the classpath are as follows:
Use * instead of *.jar
Whenever JARs and class files are present in the same directory, we need to include both of them separately:
 java -classpath /classes:/lib/*
The Java 6 wildcard does not search for JARs in subdirectories.
The wildcard is not honored when we run a Java program as an executable JAR whose manifest has a Class-Path attribute.
JAR wildcard is honored when we use –cp or –classpath option\nCommand 5: Adding JAR in ext directory example be it ‘C:\\Program Files\\Java\\jdk1.6.0\\jre\\lib\\ext’\nBy using the above method we can add multiple JAR in our classpath. JAR from ext directory can be loaded by extension Classloader. It has been given higher priority than application class loader which loads JAR from either CLASSPATH environment variable or else directories specified in –classpath or –cp\nas5853535\nPicked\nHow To\nJava\nJava\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nHow to Align Text in HTML?\nHow to Install OpenCV for Python on Windows?\nHow to filter object array based on attributes?\nJava Tutorial\nHow to Install FFmpeg on Windows?\nArrays in Java\nSplit() String method in Java with examples\nObject Oriented Programming (OOPs) Concept in Java\nArrays.sort() in Java with examples\nHashMap in Java with Examples"},"parsed":{"kind":"list like","value":[{"code":null,"e":25681,"s":25653,"text":"\n07 Aug, 2021"},{"code":null,"e":26312,"s":25681,"text":"JAR is an abbreviation of JAVA Archive. It is used for aggregating multiple files into a single one, and it is present in a ZIP format. It can also be used as an archiving tool but the main intention to use this file for development is that the Java applets and their components(.class files) can be easily downloaded to a client browser in just one single HTTP request, rather than establishing a new connection for just one thing. This will improve the speed with which applets can be loaded onto a web page and starts their work. 
It also supports compression, which reduces the file size and the download time will be improved."},{"code":null,"e":26380,"s":26312,"text":"Methods: JAR file can be added in a classpath in two different ways"},{"code":null,"e":26423,"s":26380,"text":"Using eclipse or any IDEUsing command line"},{"code":null,"e":26448,"s":26423,"text":"Using eclipse or any IDE"},{"code":null,"e":26467,"s":26448,"text":"Using command line"},{"code":null,"e":26508,"s":26467,"text":"Step 1: Right-Click on your project name"},{"code":null,"e":26536,"s":26508,"text":"Step 2: Click on Build Path"},{"code":null,"e":26575,"s":26536,"text":"Step 3: Click on configure build path"},{"code":null,"e":26635,"s":26575,"text":"Step 4: Click on libraries and click on “Add External JARs”"},{"code":null,"e":26714,"s":26635,"text":"Step 5: Select the jar file from the folder where you have saved your jar file"},{"code":null,"e":26745,"s":26714,"text":"Step 6: Click on Apply and Ok."},{"code":null,"e":26812,"s":26745,"text":"Command 1: By including JAR name in CLASSPATH environment variable"},{"code":null,"e":27022,"s":26812,"text":"CLASSPATH environment variable is not case-sensitive. It can be either Classpath or classpath which is similar to PATH environment variable which we can use to locate Java binaries like javaw and java command."},{"code":null,"e":27099,"s":27022,"text":"Command 2: By including name of JAR file in -a classpath command-line option"},{"code":null,"e":27571,"s":27099,"text":"This option is viable when we are passing –classpath option while running our java program like java –classpath $(CLASSPATH) Main. In this case, CLASSPATH shell variable contains the list of Jar file which is required by the application. One of the best advantages of using classpath command-line option is that it allows us to use every application to have its own set of JAR classpath. 
In other cases, it is not available to all Java programs that run on the same host."},{"code":null,"e":27649,"s":27571,"text":"Command 3: By including the JAR name in the Class-Path attribute of the manifest"},{"code":null,"e":28045,"s":27649,"text":"When we run an executable JAR file, we can see the Class-Path attribute in the manifest file inside the META-INF folder. Class-Path is given the highest priority: it overrides the CLASSPATH environment variable as well as the -classpath command-line option. Hence it is a good place to include all JAR files required by a Java application."},{"code":null,"e":28112,"s":28045,"text":"Command 4: By using the Java 6 wildcard option to include multiple JARs"},{"code":null,"e":28400,"s":28112,"text":"From Java 1.6 onwards we can use a wildcard to include all JARs in a directory when setting the classpath, or when providing it to a Java program via the classpath command-line option. We can illustrate adding multiple JARs to the classpath with the Java 6 wildcard method as follows:"},{"code":null,"e":28433,"s":28400,"text":"java.exe -classpath D:\\\\lib\\\\* Main"},{"code":null,"e":28667,"s":28433,"text":"In this method, we include all JAR files inside the ‘D:\\\\lib’ directory in the classpath. We must ensure that the syntax is correct. 
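As a quick sanity check on what the wildcard expanded to, the effective class path can be inspected from inside the JVM via the java.class.path system property. This is a sketch, not part of the original article; the sample Windows-style path is hypothetical.

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

// Sketch (not from the article): the launcher expands a wildcard such as
// D:\lib\* before main() runs, and the resulting entries are visible through
// the java.class.path system property. The sample value below is hypothetical.
public class ClasspathEntries {

    // Split a classpath string into its individual entries.
    public static List<String> entries(String classpath, String separator) {
        return Arrays.asList(classpath.split(Pattern.quote(separator)));
    }

    public static void main(String[] args) {
        // On Windows entries are separated by ';', on Unix by ':' (File.pathSeparator).
        String expanded = "D:\\lib\\a.jar;D:\\lib\\b.jar";
        System.out.println(entries(expanded, ";"));
        // Entries of the currently running JVM:
        System.out.println(entries(System.getProperty("java.class.path"),
                                   File.pathSeparator));
    }
}
```

Printing the property after launching with `-cp "lib/*"` is an easy way to confirm the wildcard matched the JARs you expected.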
Some more important points about using the Java 6 wildcard to include multiple JARs in the classpath are as follows:"},{"code":null,"e":28690,"s":28667,"text":"Use * instead of *.jar"},{"code":null,"e":28799,"s":28690,"text":"Whenever JARs and class files are present in the same directory, we need to include both of them separately"},{"code":null,"e":28834,"s":28799,"text":" java -classpath /classes:/lib/*"},{"code":null,"e":28924,"s":28834,"text":"The Java 6 wildcard does not search for JARs in subdirectories."},{"code":null,"e":29145,"s":28924,"text":"The wildcard is not honored when we run a Java program from a JAR file whose manifest contains a Class-Path attribute. The JAR wildcard is honored only when we use the -cp or -classpath option"},{"code":null,"e":29243,"s":29145,"text":"Command 5: Adding JAR in the ext directory, e.g. ‘C:\\\\Program Files\\\\Java\\\\jdk1.6.0\\\\jre\\\\lib\\\\ext’"},{"code":null,"e":29548,"s":29243,"text":"By using the above method we can add multiple JARs to our classpath. A JAR in the ext directory is loaded by the extension class loader. 
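The loader hierarchy behind this priority can be observed directly. A minimal sketch, not from the article; note that the ext directory mechanism itself was removed in Java 9, where the application loader's parent is the platform loader instead.

```java
// Sketch of the delegation chain: core classes such as java.lang.Object come
// from the bootstrap loader, which is reported as null; the application
// (system) class loader delegates to a parent loader first -- the extension
// loader on Java 8, or the platform loader on Java 9+.
public class LoaderChain {
    public static void main(String[] args) {
        ClassLoader app = ClassLoader.getSystemClassLoader();
        System.out.println("application loader: " + app);
        System.out.println("parent loader     : " + app.getParent());
        System.out.println("bootstrap loader  : " + Object.class.getClassLoader()); // null
    }
}
```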
It has been given higher priority than application class loader which loads JAR from either CLASSPATH environment variable or else directories specified in –classpath or –cp"},{"code":null,"e":29558,"s":29548,"text":"as5853535"},{"code":null,"e":29565,"s":29558,"text":"Picked"},{"code":null,"e":29572,"s":29565,"text":"How To"},{"code":null,"e":29577,"s":29572,"text":"Java"},{"code":null,"e":29582,"s":29577,"text":"Java"},{"code":null,"e":29680,"s":29582,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":29707,"s":29680,"text":"How to Align Text in HTML?"},{"code":null,"e":29752,"s":29707,"text":"How to Install OpenCV for Python on Windows?"},{"code":null,"e":29800,"s":29752,"text":"How to filter object array based on attributes?"},{"code":null,"e":29814,"s":29800,"text":"Java Tutorial"},{"code":null,"e":29848,"s":29814,"text":"How to Install FFmpeg on Windows?"},{"code":null,"e":29863,"s":29848,"text":"Arrays in Java"},{"code":null,"e":29907,"s":29863,"text":"Split() String method in Java with examples"},{"code":null,"e":29958,"s":29907,"text":"Object Oriented Programming (OOPs) Concept in Java"},{"code":null,"e":29994,"s":29958,"text":"Arrays.sort() in Java with examples"}],"string":"[\n {\n \"code\": null,\n \"e\": 25681,\n \"s\": 25653,\n \"text\": \"\\n07 Aug, 2021\"\n },\n {\n \"code\": null,\n \"e\": 26312,\n \"s\": 25681,\n \"text\": \"JAR is an abbreviation of JAVA Archive. It is used for aggregating multiple files into a single one, and it is present in a ZIP format. It can also be used as an archiving tool but the main intention to use this file for development is that the Java applets and their components(.class files) can be easily downloaded to a client browser in just one single HTTP request, rather than establishing a new connection for just one thing. This will improve the speed with which applets can be loaded onto a web page and starts their work. 
It also supports compression, which reduces the file size and the download time will be improved.\"\n },\n {\n \"code\": null,\n \"e\": 26380,\n \"s\": 26312,\n \"text\": \"Methods: JAR file can be added in a classpath in two different ways\"\n },\n {\n \"code\": null,\n \"e\": 26423,\n \"s\": 26380,\n \"text\": \"Using eclipse or any IDEUsing command line\"\n },\n {\n \"code\": null,\n \"e\": 26448,\n \"s\": 26423,\n \"text\": \"Using eclipse or any IDE\"\n },\n {\n \"code\": null,\n \"e\": 26467,\n \"s\": 26448,\n \"text\": \"Using command line\"\n },\n {\n \"code\": null,\n \"e\": 26508,\n \"s\": 26467,\n \"text\": \"Step 1: Right-Click on your project name\"\n },\n {\n \"code\": null,\n \"e\": 26536,\n \"s\": 26508,\n \"text\": \"Step 2: Click on Build Path\"\n },\n {\n \"code\": null,\n \"e\": 26575,\n \"s\": 26536,\n \"text\": \"Step 3: Click on configure build path\"\n },\n {\n \"code\": null,\n \"e\": 26635,\n \"s\": 26575,\n \"text\": \"Step 4: Click on libraries and click on “Add External JARs”\"\n },\n {\n \"code\": null,\n \"e\": 26714,\n \"s\": 26635,\n \"text\": \"Step 5: Select the jar file from the folder where you have saved your jar file\"\n },\n {\n \"code\": null,\n \"e\": 26745,\n \"s\": 26714,\n \"text\": \"Step 6: Click on Apply and Ok.\"\n },\n {\n \"code\": null,\n \"e\": 26812,\n \"s\": 26745,\n \"text\": \"Command 1: By including JAR name in CLASSPATH environment variable\"\n },\n {\n \"code\": null,\n \"e\": 27022,\n \"s\": 26812,\n \"text\": \"CLASSPATH environment variable is not case-sensitive. 
It can be either Classpath or classpath which is similar to PATH environment variable which we can use to locate Java binaries like javaw and java command.\"\n },\n {\n \"code\": null,\n \"e\": 27099,\n \"s\": 27022,\n \"text\": \"Command 2: By including name of JAR file in -a classpath command-line option\"\n },\n {\n \"code\": null,\n \"e\": 27571,\n \"s\": 27099,\n \"text\": \"This option is viable when we are passing –classpath option while running our java program like java –classpath $(CLASSPATH) Main. In this case, CLASSPATH shell variable contains the list of Jar file which is required by the application. One of the best advantages of using classpath command-line option is that it allows us to use every application to have its own set of JAR classpath. In other cases, it’s not available to all Java program which runs on the same host.\"\n },\n {\n \"code\": null,\n \"e\": 27649,\n \"s\": 27571,\n \"text\": \"Command 3: By including the jar name in the Class-Path option in the manifest\"\n },\n {\n \"code\": null,\n \"e\": 28045,\n \"s\": 27649,\n \"text\": \"When we are running an executable JAR file we always notice that the Class-Path attribute in the file inside the META-INF folder. Therefore, we can say that Class-Path is given the highest priority and it overrides the CLASSPATH environment variable as well as –classpath command-line option. Henceforth, we can deduce that its a good place to include all JAR files required by Java Application.\"\n },\n {\n \"code\": null,\n \"e\": 28112,\n \"s\": 28045,\n \"text\": \"Command 4: By using Java 6 wildcard option to include multiple JAR\"\n },\n {\n \"code\": null,\n \"e\": 28400,\n \"s\": 28112,\n \"text\": \"From Java 1.6+ onwards we can use a wildcard for including all jars in a directory to set classpath or else to provide Java program using classpath command-line option. 
We can illustrate the Java command example to add multiple JAR into classpath using Java 6 wildcard method as follows,\"\n },\n {\n \"code\": null,\n \"e\": 28433,\n \"s\": 28400,\n \"text\": \"java.exe -classpath D:\\\\lib\\\\*Main\"\n },\n {\n \"code\": null,\n \"e\": 28667,\n \"s\": 28433,\n \"text\": \"In this method, we include all JAR files inside ‘D:\\\\lib’ directory into the classpath. We must ensure that syntax is correct. Some more important points about using Java 6 wildcard to include multiple JAR in classpath are as follows:\"\n },\n {\n \"code\": null,\n \"e\": 28690,\n \"s\": 28667,\n \"text\": \"Use * instead of *.jar\"\n },\n {\n \"code\": null,\n \"e\": 28799,\n \"s\": 28690,\n \"text\": \"Whenever JAR and classfile are present in the same directory then we need to include both of them separately\"\n },\n {\n \"code\": null,\n \"e\": 28834,\n \"s\": 28799,\n \"text\": \" Java -classpath /classes: /lib/*\"\n },\n {\n \"code\": null,\n \"e\": 28924,\n \"s\": 28834,\n \"text\": \"In Java 6 wildcard which includes all JAR, it will not search for JARs in a subdirectory.\"\n },\n {\n \"code\": null,\n \"e\": 29145,\n \"s\": 28924,\n \"text\": \"Wildcard is included in all JAR is not honored in the case when we are running Java program with JAR file and having Class-Path attribute in the manifest file. JAR wildcard is honored when we use –cp or –classpath option\"\n },\n {\n \"code\": null,\n \"e\": 29243,\n \"s\": 29145,\n \"text\": \"Command 5: Adding JAR in ext directory example be it ‘C:\\\\Program Files\\\\Java\\\\jdk1.6.0\\\\jre\\\\lib\\\\ext’\"\n },\n {\n \"code\": null,\n \"e\": 29548,\n \"s\": 29243,\n \"text\": \"By using the above method we can add multiple JAR in our classpath. JAR from ext directory can be loaded by extension Classloader. 
It has been given higher priority than application class loader which loads JAR from either CLASSPATH environment variable or else directories specified in –classpath or –cp\"\n },\n {\n \"code\": null,\n \"e\": 29558,\n \"s\": 29548,\n \"text\": \"as5853535\"\n },\n {\n \"code\": null,\n \"e\": 29565,\n \"s\": 29558,\n \"text\": \"Picked\"\n },\n {\n \"code\": null,\n \"e\": 29572,\n \"s\": 29565,\n \"text\": \"How To\"\n },\n {\n \"code\": null,\n \"e\": 29577,\n \"s\": 29572,\n \"text\": \"Java\"\n },\n {\n \"code\": null,\n \"e\": 29582,\n \"s\": 29577,\n \"text\": \"Java\"\n },\n {\n \"code\": null,\n \"e\": 29680,\n \"s\": 29582,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 29707,\n \"s\": 29680,\n \"text\": \"How to Align Text in HTML?\"\n },\n {\n \"code\": null,\n \"e\": 29752,\n \"s\": 29707,\n \"text\": \"How to Install OpenCV for Python on Windows?\"\n },\n {\n \"code\": null,\n \"e\": 29800,\n \"s\": 29752,\n \"text\": \"How to filter object array based on attributes?\"\n },\n {\n \"code\": null,\n \"e\": 29814,\n \"s\": 29800,\n \"text\": \"Java Tutorial\"\n },\n {\n \"code\": null,\n \"e\": 29848,\n \"s\": 29814,\n \"text\": \"How to Install FFmpeg on Windows?\"\n },\n {\n \"code\": null,\n \"e\": 29863,\n \"s\": 29848,\n \"text\": \"Arrays in Java\"\n },\n {\n \"code\": null,\n \"e\": 29907,\n \"s\": 29863,\n \"text\": \"Split() String method in Java with examples\"\n },\n {\n \"code\": null,\n \"e\": 29958,\n \"s\": 29907,\n \"text\": \"Object Oriented Programming (OOPs) Concept in Java\"\n },\n {\n \"code\": null,\n \"e\": 29994,\n \"s\": 29958,\n \"text\": \"Arrays.sort() in Java with examples\"\n }\n]"}}},{"rowIdx":532,"cells":{"title":{"kind":"string","value":"Mapping CSV to JavaBeans Using OpenCSV - GeeksforGeeks"},"text":{"kind":"string","value":"17 Jul, 2018\nOpenCSV provides classes to map CSV file to a list of Java-beans. 
The CsvToBean class is used to map CSV data to JavaBeans. To parse the data into beans, we define a mapping strategy and pass it to CsvToBean. HeaderColumnNameTranslateMappingStrategy is the mapping strategy that maps a column name to a JavaBean property.\nFirst add OpenCSV to the project. For a Maven project, include the OpenCSV Maven dependency in the pom.xml file. 
com.opencsv opencsv 4.1For Gradle Project, include the OpenCSV dependency.compile group: 'com.opencsv', name: 'opencsv', version: '4.1'You can Download OpenCSV Jar and include in your project class path.\nFor maven project, include the OpenCSV maven dependency in pom.xml file. com.opencsv opencsv 4.1\n com.opencsv opencsv 4.1\nFor Gradle Project, include the OpenCSV dependency.compile group: 'com.opencsv', name: 'opencsv', version: '4.1'\ncompile group: 'com.opencsv', name: 'opencsv', version: '4.1'\nYou can Download OpenCSV Jar and include in your project class path.\nMapping CSV to JavaBeansMapping a CSV to JavaBeans is simple and easy process. Just follow these couple of steps:Create a Hashmap with mapping between the column id and bean property.Map mapping = new HashMap();\n mapping.put(\"column id \", \"javaBeanProperty\");\nThen add all the column id of csv file with their corresponding javabean property.Create HeaderColumnNameTranslateMappingStrategy object pass mapping hashmap to setColumnMapping method.HeaderColumnNameTranslateMappingStrategy strategy =\n new HeaderColumnNameTranslateMappingStrategy();\n strategy.setType(JavaBeanObject.class);\n strategy.setColumnMapping(mapping);\nCreate the object of CSVReade and CsvToBean classString csvFilename = \"data.csv\";\nCSVReader csvReader = new CSVReader(new FileReader(csvFilename));\nCsvToBean csv = new CsvToBean();\nCall parse method of CsvToBean class and pass HeaderColumnNameTranslateMappingStrategy and CSVReader objects.List list = csv.parse(strategy, csvReader);\n\nCreate a Hashmap with mapping between the column id and bean property.Map mapping = new HashMap();\n mapping.put(\"column id \", \"javaBeanProperty\");\nThen add all the column id of csv file with their corresponding javabean property.Create HeaderColumnNameTranslateMappingStrategy object pass mapping hashmap to setColumnMapping method.HeaderColumnNameTranslateMappingStrategy strategy =\n new HeaderColumnNameTranslateMappingStrategy();\n 
strategy.setType(JavaBeanObject.class);\n strategy.setColumnMapping(mapping);\nCreate the object of CSVReade and CsvToBean classString csvFilename = \"data.csv\";\nCSVReader csvReader = new CSVReader(new FileReader(csvFilename));\nCsvToBean csv = new CsvToBean();\nCall parse method of CsvToBean class and pass HeaderColumnNameTranslateMappingStrategy and CSVReader objects.List list = csv.parse(strategy, csvReader);\n\nCreate a Hashmap with mapping between the column id and bean property.Map mapping = new HashMap();\n mapping.put(\"column id \", \"javaBeanProperty\");\nThen add all the column id of csv file with their corresponding javabean property.\nMap mapping = new HashMap();\n mapping.put(\"column id \", \"javaBeanProperty\");\n\nThen add all the column id of csv file with their corresponding javabean property.\nCreate HeaderColumnNameTranslateMappingStrategy object pass mapping hashmap to setColumnMapping method.HeaderColumnNameTranslateMappingStrategy strategy =\n new HeaderColumnNameTranslateMappingStrategy();\n strategy.setType(JavaBeanObject.class);\n strategy.setColumnMapping(mapping);\n\nHeaderColumnNameTranslateMappingStrategy strategy =\n new HeaderColumnNameTranslateMappingStrategy();\n strategy.setType(JavaBeanObject.class);\n strategy.setColumnMapping(mapping);\n\nCreate the object of CSVReade and CsvToBean classString csvFilename = \"data.csv\";\nCSVReader csvReader = new CSVReader(new FileReader(csvFilename));\nCsvToBean csv = new CsvToBean();\n\nString csvFilename = \"data.csv\";\nCSVReader csvReader = new CSVReader(new FileReader(csvFilename));\nCsvToBean csv = new CsvToBean();\n\nCall parse method of CsvToBean class and pass HeaderColumnNameTranslateMappingStrategy and CSVReader objects.List list = csv.parse(strategy, csvReader);\n\nList list = csv.parse(strategy, csvReader);\n\nExample: Let’ s convert csv file containing Student data to Student objects having attribute Name, RollNo, Department, Result, Pointer.\nStudentData.csv:\n\nname, 
rollno, department, result, cgpa\namar, 42, cse, pass, 8.6\nrohini, 21, ece, fail, 3.2\naman, 23, cse, pass, 8.9\nrahul, 45, ee, fail, 4.6\npratik, 65, cse, pass, 7.2\nraunak, 23, me, pass, 9.1\nsuvam, 68, me, pass, 8.2\n\nFirst create a Student Class with Attributes Name, RollNo, Department, Result, Pointer. Then Create a main class which map csv data to JavaBeans object.\nPrograms:\nStudent.javapublic class Student { private static final long serialVersionUID = 1L; public String Name, RollNo, Department, Result, Pointer; public String getName() { return Name; } public void setName(String name) { Name = name; } public String getRollNo() { return RollNo; } public void setRollNo(String rollNo) { RollNo = rollNo; } public String getDepartment() { return Department; } public void setDepartment(String department) { Department = department; } public String getResult() { return Result; } public void setResult(String result) { Result = result; } public String getPointer() { return Pointer; } public void setPointer(String pointer) { Pointer = pointer; } @Override public String toString() { return \"Student [Name=\" + Name + \", RollNo=\" + RollNo + \", Department = \" + Department + \", Result = \" + Result + \", Pointer=\" + Pointer + \"]\"; }}csvtobean.javaimport java.io.*;import java.util.*; import com.opencsv.CSVReader;import com.opencsv.bean.CsvToBean;import com.opencsv.bean.HeaderColumnNameTranslateMappingStrategy; public class csvtobean { public static void main(String[] args) { // Hashmap to map CSV data to // Bean attributes. 
Map mapping = new HashMap(); mapping.put(\"name\", \"Name\"); mapping.put(\"rollno\", \"RollNo\"); mapping.put(\"department\", \"Department\"); mapping.put(\"result\", \"Result\"); mapping.put(\"cgpa\", \"Pointer\"); // HeaderColumnNameTranslateMappingStrategy // for Student class HeaderColumnNameTranslateMappingStrategy strategy = new HeaderColumnNameTranslateMappingStrategy(); strategy.setType(Student.class); strategy.setColumnMapping(mapping); // Create castobaen and csvreader object CSVReader csvReader = null; try { csvReader = new CSVReader(new FileReader (\"D:\\\\EclipseWorkSpace\\\\CSVOperations\\\\StudentData.csv\")); } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } CsvToBean csvToBean = new CsvToBean(); // call the parse method of CsvToBean // pass strategy, csvReader to parse method List list = csvToBean.parse(strategy, csvReader); // print details of Bean object for (Student e : list) { System.out.println(e); } }}\nStudent.javapublic class Student { private static final long serialVersionUID = 1L; public String Name, RollNo, Department, Result, Pointer; public String getName() { return Name; } public void setName(String name) { Name = name; } public String getRollNo() { return RollNo; } public void setRollNo(String rollNo) { RollNo = rollNo; } public String getDepartment() { return Department; } public void setDepartment(String department) { Department = department; } public String getResult() { return Result; } public void setResult(String result) { Result = result; } public String getPointer() { return Pointer; } public void setPointer(String pointer) { Pointer = pointer; } @Override public String toString() { return \"Student [Name=\" + Name + \", RollNo=\" + RollNo + \", Department = \" + Department + \", Result = \" + Result + \", Pointer=\" + Pointer + \"]\"; }}\npublic class Student { private static final long serialVersionUID = 1L; public String Name, RollNo, Department, Result, Pointer; public String 
getName() { return Name; } public void setName(String name) { Name = name; } public String getRollNo() { return RollNo; } public void setRollNo(String rollNo) { RollNo = rollNo; } public String getDepartment() { return Department; } public void setDepartment(String department) { Department = department; } public String getResult() { return Result; } public void setResult(String result) { Result = result; } public String getPointer() { return Pointer; } public void setPointer(String pointer) { Pointer = pointer; } @Override public String toString() { return \"Student [Name=\" + Name + \", RollNo=\" + RollNo + \", Department = \" + Department + \", Result = \" + Result + \", Pointer=\" + Pointer + \"]\"; }}\ncsvtobean.javaimport java.io.*;import java.util.*; import com.opencsv.CSVReader;import com.opencsv.bean.CsvToBean;import com.opencsv.bean.HeaderColumnNameTranslateMappingStrategy; public class csvtobean { public static void main(String[] args) { // Hashmap to map CSV data to // Bean attributes. 
Map mapping = new HashMap(); mapping.put(\"name\", \"Name\"); mapping.put(\"rollno\", \"RollNo\"); mapping.put(\"department\", \"Department\"); mapping.put(\"result\", \"Result\"); mapping.put(\"cgpa\", \"Pointer\"); // HeaderColumnNameTranslateMappingStrategy // for Student class HeaderColumnNameTranslateMappingStrategy strategy = new HeaderColumnNameTranslateMappingStrategy(); strategy.setType(Student.class); strategy.setColumnMapping(mapping); // Create castobaen and csvreader object CSVReader csvReader = null; try { csvReader = new CSVReader(new FileReader (\"D:\\\\EclipseWorkSpace\\\\CSVOperations\\\\StudentData.csv\")); } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } CsvToBean csvToBean = new CsvToBean(); // call the parse method of CsvToBean // pass strategy, csvReader to parse method List list = csvToBean.parse(strategy, csvReader); // print details of Bean object for (Student e : list) { System.out.println(e); } }}\nimport java.io.*;import java.util.*; import com.opencsv.CSVReader;import com.opencsv.bean.CsvToBean;import com.opencsv.bean.HeaderColumnNameTranslateMappingStrategy; public class csvtobean { public static void main(String[] args) { // Hashmap to map CSV data to // Bean attributes. 
Map mapping = new HashMap(); mapping.put(\"name\", \"Name\"); mapping.put(\"rollno\", \"RollNo\"); mapping.put(\"department\", \"Department\"); mapping.put(\"result\", \"Result\"); mapping.put(\"cgpa\", \"Pointer\"); // HeaderColumnNameTranslateMappingStrategy // for Student class HeaderColumnNameTranslateMappingStrategy strategy = new HeaderColumnNameTranslateMappingStrategy(); strategy.setType(Student.class); strategy.setColumnMapping(mapping); // Create castobaen and csvreader object CSVReader csvReader = null; try { csvReader = new CSVReader(new FileReader (\"D:\\\\EclipseWorkSpace\\\\CSVOperations\\\\StudentData.csv\")); } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } CsvToBean csvToBean = new CsvToBean(); // call the parse method of CsvToBean // pass strategy, csvReader to parse method List list = csvToBean.parse(strategy, csvReader); // print details of Bean object for (Student e : list) { System.out.println(e); } }}\nOutput:\nStudent [Name=amar, RollNo=42, Department=cse, Result=pass, Pointer=8.6]\nStudent [Name=rohini, RollNo=21, Department=ece, Result=fail, Pointer=3.2]\nStudent [Name=aman, RollNo=23, Department=cse, Result=pass, Pointer=8.9]\nStudent [Name=rahul, RollNo=45, Department=ee, Result=fail, Pointer=4.6]\nStudent [Name=pratik, RollNo=65, Department=cse, Result=pass, Pointer=7.2]\nStudent [Name=raunak, RollNo=23, Department=me, Result=pass, Pointer=9.1]\nStudent [Name=suvam, RollNo=68, Department=me, Result=pass, Pointer=8.2]\n\nReference: OpenCSV Documentation, CsvTOBean Documentation, MappingStrategy\nCSV\nJava\nJava\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nObject Oriented Programming (OOPs) Concept in Java\nHashMap in Java with Examples\nStream In Java\nInterfaces in Java\nHow to iterate any Map in Java\nArrayList in Java\nInitialize an ArrayList in Java\nStack Class in Java\nMultidimensional Arrays in Java\nSingleton Class in 
Java"},"parsed":{"kind":"list like","value":[{"code":null,"e":25830,"s":25802,"text":"\n17 Jul, 2018"},{"code":null,"e":26237,"s":25830,"text":"OpenCSV provides classes to map CSV file to a list of Java-beans. CsvToBean class is used to map CSV data to JavaBeans. The CSV data can be parsed to a bean, but what is required to be done is to define the mapping strategy and pass the strategy to CsvToBean to parse the data into a bean. HeaderColumnNameTranslateMappingStrategy is the mapping strategy which maps the column id to the java bean property."},{"code":null,"e":27624,"s":26237,"text":"First add OpenCSV to the project.For maven project, include the OpenCSV maven dependency in pom.xml file. com.opencsv opencsv 4.1For Gradle Project, include the OpenCSV dependency.compile group: 'com.opencsv', name: 'opencsv', version: '4.1'You can Download OpenCSV Jar and include in your project class path.Mapping CSV to JavaBeansMapping a CSV to JavaBeans is simple and easy process. Just follow these couple of steps:Create a Hashmap with mapping between the column id and bean property.Map mapping = new HashMap();\n mapping.put(\"column id \", \"javaBeanProperty\");\nThen add all the column id of csv file with their corresponding javabean property.Create HeaderColumnNameTranslateMappingStrategy object pass mapping hashmap to setColumnMapping method.HeaderColumnNameTranslateMappingStrategy strategy =\n new HeaderColumnNameTranslateMappingStrategy();\n strategy.setType(JavaBeanObject.class);\n strategy.setColumnMapping(mapping);\nCreate the object of CSVReade and CsvToBean classString csvFilename = \"data.csv\";\nCSVReader csvReader = new CSVReader(new FileReader(csvFilename));\nCsvToBean csv = new CsvToBean();\nCall parse method of CsvToBean class and pass HeaderColumnNameTranslateMappingStrategy and CSVReader objects.List list = csv.parse(strategy, csvReader);\n"},{"code":null,"e":28031,"s":27624,"text":"First add OpenCSV to the project.For maven project, include the OpenCSV maven 
dependency in pom.xml file. com.opencsv opencsv 4.1For Gradle Project, include the OpenCSV dependency.compile group: 'com.opencsv', name: 'opencsv', version: '4.1'You can Download OpenCSV Jar and include in your project class path."},{"code":null,"e":28225,"s":28031,"text":"For maven project, include the OpenCSV maven dependency in pom.xml file. com.opencsv opencsv 4.1"},{"code":" com.opencsv opencsv 4.1","e":28347,"s":28225,"text":null},{"code":null,"e":28460,"s":28347,"text":"For Gradle Project, include the OpenCSV dependency.compile group: 'com.opencsv', name: 'opencsv', version: '4.1'"},{"code":null,"e":28522,"s":28460,"text":"compile group: 'com.opencsv', name: 'opencsv', version: '4.1'"},{"code":null,"e":28591,"s":28522,"text":"You can Download OpenCSV Jar and include in your project class path."},{"code":null,"e":29572,"s":28591,"text":"Mapping CSV to JavaBeansMapping a CSV to JavaBeans is simple and easy process. Just follow these couple of steps:Create a Hashmap with mapping between the column id and bean property.Map mapping = new HashMap();\n mapping.put(\"column id \", \"javaBeanProperty\");\nThen add all the column id of csv file with their corresponding javabean property.Create HeaderColumnNameTranslateMappingStrategy object pass mapping hashmap to setColumnMapping method.HeaderColumnNameTranslateMappingStrategy strategy =\n new HeaderColumnNameTranslateMappingStrategy();\n strategy.setType(JavaBeanObject.class);\n strategy.setColumnMapping(mapping);\nCreate the object of CSVReade and CsvToBean classString csvFilename = \"data.csv\";\nCSVReader csvReader = new CSVReader(new FileReader(csvFilename));\nCsvToBean csv = new CsvToBean();\nCall parse method of CsvToBean class and pass HeaderColumnNameTranslateMappingStrategy and CSVReader objects.List list = csv.parse(strategy, csvReader);\n"},{"code":null,"e":30440,"s":29572,"text":"Create a Hashmap with mapping between the column id and bean property.Map mapping = new HashMap();\n mapping.put(\"column id 
", "javaBeanProperty");
Then add all the column ids of the CSV file with their corresponding JavaBean properties.
Create a HeaderColumnNameTranslateMappingStrategy object and pass the mapping HashMap to its setColumnMapping method.
HeaderColumnNameTranslateMappingStrategy strategy =
    new HeaderColumnNameTranslateMappingStrategy();
strategy.setType(JavaBeanObject.class);
strategy.setColumnMapping(mapping);
Create the objects of the CSVReader and CsvToBean classes.
String csvFilename = "data.csv";
CSVReader csvReader = new CSVReader(new FileReader(csvFilename));
CsvToBean csv = new CsvToBean();
Call the parse method of the CsvToBean class and pass the HeaderColumnNameTranslateMappingStrategy and CSVReader objects to it.
List list = csv.parse(strategy, csvReader);
Example: Let's convert a CSV file containing Student data to Student objects having the attributes Name, RollNo, Department, Result, Pointer.
StudentData.csv:

name, rollno, department, result, cgpa
amar, 42, cse, pass, 8.6
rohini, 21, ece, fail, 3.2
aman, 23, cse, pass, 8.9
rahul, 45, ee, fail, 4.6
pratik, 65, cse, pass, 7.2
raunak, 23, me, pass, 9.1
suvam, 68, me, pass, 8.2

First create a Student class with the attributes Name, RollNo, Department, Result, Pointer. Then create a main class which maps the CSV data to JavaBean objects.
Programs:
Student.java

public class Student {

    private static final long serialVersionUID = 1L;
    public String Name, RollNo, Department, Result, Pointer;

    public String getName() { return Name; }
    public void setName(String name) { Name = name; }

    public String getRollNo() { return RollNo; }
    public void setRollNo(String rollNo) { RollNo = rollNo; }

    public String getDepartment() { return Department; }
    public void setDepartment(String department) { Department = department; }

    public String getResult() { return Result; }
    public void setResult(String result) { Result = result; }

    public String getPointer() { return Pointer; }
    public void setPointer(String pointer) { Pointer = pointer; }

    @Override
    public String toString()
    {
        return "Student [Name=" + Name + ", RollNo=" + RollNo
            + ", Department = " + Department + ", Result = "
            + Result + ", Pointer=" + Pointer + "]";
    }
}

csvtobean.java

import java.io.*;
import java.util.*;

import com.opencsv.CSVReader;
import com.opencsv.bean.CsvToBean;
import com.opencsv.bean.HeaderColumnNameTranslateMappingStrategy;

public class csvtobean {

    public static void main(String[] args)
    {
        // Hashmap to map CSV data to
        // Bean attributes.
        Map mapping = new HashMap();
        mapping.put("name", "Name");
        mapping.put("rollno", "RollNo");
        mapping.put("department", "Department");
        mapping.put("result", "Result");
        mapping.put("cgpa", "Pointer");

        // HeaderColumnNameTranslateMappingStrategy
        // for the Student class
        HeaderColumnNameTranslateMappingStrategy strategy =
            new HeaderColumnNameTranslateMappingStrategy();
        strategy.setType(Student.class);
        strategy.setColumnMapping(mapping);

        // Create the CSVReader object
        CSVReader csvReader = null;
        try {
            csvReader = new CSVReader(new FileReader(
                "D:\\EclipseWorkSpace\\CSVOperations\\StudentData.csv"));
        }
        catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        CsvToBean csvToBean = new CsvToBean();

        // Call the parse method of CsvToBean and
        // pass strategy and csvReader to it
        List<Student> list = csvToBean.parse(strategy, csvReader);

        // Print details of each Bean object
        for (Student e : list) {
            System.out.println(e);
        }
    }
}

Output:

Student [Name=amar, RollNo=42, Department=cse, Result=pass, Pointer=8.6]
Student [Name=rohini, RollNo=21, Department=ece, Result=fail, Pointer=3.2]
Student [Name=aman, RollNo=23, Department=cse, Result=pass, Pointer=8.9]
Student [Name=rahul, RollNo=45, Department=ee, Result=fail, Pointer=4.6]
Student [Name=pratik, RollNo=65, Department=cse, Result=pass, Pointer=7.2]
Student [Name=raunak, RollNo=23, Department=me, Result=pass, Pointer=9.1]
Student [Name=suvam, RollNo=68, Department=me, Result=pass, Pointer=8.2]

Reference: OpenCSV Documentation, CsvToBean Documentation, MappingStrategy
Python | Decision tree implementation - GeeksforGeeks
26 Apr, 2022
Prerequisites: Decision Tree, DecisionTreeClassifier, sklearn, numpy, pandas
Decision Tree is one of the most powerful and popular algorithms. The decision-tree algorithm falls under the category of supervised learning algorithms.
It works for both continuous as well as categorical output variables.
In this article, we are going to implement a decision tree algorithm on the Balance Scale Weight & Distance Database presented on the UCI repository.
Title: Balance Scale Weight & Distance Database
Number of Instances: 625 (49 balanced, 288 left, 288 right)
Number of Attributes: 4 (numeric) + class name = 5
Attribute Information:
Class Name (target variable): 3
L [balance scale tips to the left]
B [balance scale is balanced]
R [balance scale tips to the right]
Left-Weight: 5 (1, 2, 3, 4, 5)
Left-Distance: 5 (1, 2, 3, 4, 5)
Right-Weight: 5 (1, 2, 3, 4, 5)
Right-Distance: 5 (1, 2, 3, 4, 5)
Missing Attribute Values: None
Class Distribution:
46.08 percent are L
07.84 percent are B
46.08 percent are R
You can find more details of the dataset here.
sklearn: In Python, sklearn is a machine learning package which includes a lot of ML algorithms. Here, we are using some of its
modules like train_test_split, DecisionTreeClassifier and accuracy_score.
NumPy: It is a numeric Python module which provides fast math functions for calculations. It is used to read data into NumPy arrays and for manipulation purposes.
Pandas: Used to read and write different files. Data manipulation can be done easily with dataframes.
In Python, sklearn is the package which contains all the required packages to implement machine learning algorithms. You can install the sklearn package by following the commands given below.
Using pip:
pip install -U scikit-learn
Before using the above command, make sure you have the scipy and numpy packages installed.
If you don't have pip,
you can install it using
python get-pip.py
Using conda:
conda install scikit-learn
Assumptions we make while using the decision tree:
At the beginning, we consider the whole training set as the root.
Attributes are assumed to be categorical for information gain and, for the Gini index, attributes are assumed to be continuous.
Records are distributed recursively on the basis of attribute values.
We use statistical methods for ordering attributes as the root or an internal node.
Pseudocode:
Find the best attribute and place it on the root node of the tree.
Now, split the training set of the dataset into subsets.
While making the subsets, make sure that each subset of the training dataset has the same value for an attribute.
Find leaf nodes in all branches by repeating 1 and 2 on each subset.
While implementing the decision tree we will go through the following two phases:
Building Phase
Preprocess the dataset.
Split the dataset into train and test using the Python sklearn package.
Train the classifier.
Operational Phase
Make predictions.
Calculate the accuracy.
Data Import:
To import and manipulate the data we are using the pandas package provided in Python.
Here, we are using a URL which directly fetches the dataset from the UCI site, so there is no need to download the dataset. When you run this code on your system, make sure it has an active Internet connection.
As the dataset is separated by ",", we have to pass the sep parameter's value as ",".
Another thing to notice is that the dataset doesn't contain a header, so we will pass the header parameter's value as None. If we do not pass the header parameter, the first line of the dataset will be treated as the header.
Data Slicing:
Before training the model we have to split the dataset into training and testing datasets.
To split the dataset for training and testing we are using the sklearn module train_test_split.
First of all we have to separate the target variable from the attributes in the dataset.
X = balance_data.values[:, 1:5]
Y = balance_data.values[:, 0]
Above are the lines from the code which separate the dataset.
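As a quick, self-contained illustration of this slicing, here is a sketch using a tiny hypothetical in-memory frame instead of the downloaded CSV (the sample rows below are made up for demonstration only):

```python
import pandas as pd

# A tiny stand-in for the balance-scale data: column 0 holds the
# class label, columns 1-4 hold the four numeric attributes.
toy = pd.DataFrame([['L', 1, 2, 1, 1],
                    ['B', 2, 2, 2, 2],
                    ['R', 1, 1, 3, 4]])

X = toy.values[:, 1:5]  # attribute columns 1..4
Y = toy.values[:, 0]    # class-label column 0

print(X.shape)  # (3, 4)
print(Y)        # ['L' 'B' 'R']
```

The same slicing applies unchanged to the full 625-row dataset once it is loaded.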
The variable X contains the attributes while the variable Y contains the target variable of the dataset.
The next step is to split the dataset for training and testing:
X_train, X_test, y_train, y_test = train_test_split(
    X, Y, test_size=0.3, random_state=100)
The line above splits the dataset for training and testing. As we are splitting the dataset in a ratio of 70:30 between training and testing, we pass the test_size parameter's value as 0.3.
random_state seeds the pseudo-random number generator used for random sampling, so the same split is produced on every run.
Terms used in code:
Gini index and information gain are both methods used to select, from the n attributes of the dataset, the attribute to place at the root node or an internal node.
Gini index:
The Gini index is a metric that measures how often a randomly chosen element would be incorrectly identified, so an attribute with a lower Gini index should be preferred.
Sklearn supports the "gini" criterion for the Gini index, and it is the default value.
Entropy:
Entropy is the measure of uncertainty of a random variable; it characterizes the impurity of an arbitrary collection of examples. The higher the entropy, the higher the information content.
Information Gain:
The entropy typically changes when we use a node in a decision tree to partition the training instances into smaller subsets. Information gain is a measure of this change in entropy.
Sklearn supports the "entropy" criterion for information gain; if we want to use the information gain method in sklearn, we have to mention it explicitly.
Accuracy score:
The accuracy score is used to calculate the accuracy of the trained classifier.
Confusion Matrix:
The confusion matrix is used to understand the behavior of the trained classifier over the test or validation dataset.
Below is the Python code for the decision tree.

# Run this program on your local Python
# interpreter, provided you have installed
# the required libraries.

# Importing the required packages
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report

# Function importing the dataset
def importdata():
    balance_data = pd.read_csv(
        'https://archive.ics.uci.edu/ml/machine-learning-' +
        'databases/balance-scale/balance-scale.data',
        sep=',', header=None)

    # Printing the dataset shape
    print("Dataset Length: ", len(balance_data))
    print("Dataset Shape: ", balance_data.shape)

    # Printing the dataset observations
    print("Dataset: ", balance_data.head())
    return balance_data

# Function to split the dataset
def splitdataset(balance_data):
    # Separating the target variable
    X = balance_data.values[:, 1:5]
    Y = balance_data.values[:, 0]

    # Splitting the dataset into train and test
    X_train, X_test, y_train, y_test = train_test_split(
        X, Y, test_size=0.3, random_state=100)

    return X, Y, X_train, X_test, y_train, y_test

# Function to perform training with the Gini index
def train_using_gini(X_train, X_test, y_train):
    # Creating the classifier object
    clf_gini = DecisionTreeClassifier(criterion="gini",
            random_state=100, max_depth=3, min_samples_leaf=5)

    # Performing training
    clf_gini.fit(X_train, y_train)
    return clf_gini

# Function to perform training with entropy
def train_using_entropy(X_train, X_test, y_train):
    # Decision tree with entropy
    clf_entropy = DecisionTreeClassifier(
            criterion="entropy", random_state=100,
            max_depth=3, min_samples_leaf=5)

    # Performing training
    clf_entropy.fit(X_train, y_train)
    return clf_entropy

# Function to make predictions
def prediction(X_test, clf_object):
    # Prediction on the test set
    y_pred = clf_object.predict(X_test)
    print("Predicted values:")
    print(y_pred)
    return y_pred

# Function to calculate accuracy
def cal_accuracy(y_test, y_pred):
    print("Confusion Matrix: ",
          confusion_matrix(y_test, y_pred))
    print("Accuracy : ",
          accuracy_score(y_test, y_pred) * 100)
    print("Report : ",
          classification_report(y_test, y_pred))

# Driver code
def main():
    # Building Phase
    data = importdata()
    X, Y, X_train, X_test, y_train, y_test = splitdataset(data)
    clf_gini = train_using_gini(X_train, X_test, y_train)
    clf_entropy = train_using_entropy(X_train, X_test, y_train)

    # Operational Phase
    print("Results Using Gini Index:")
    # Prediction using gini
    y_pred_gini = prediction(X_test, clf_gini)
    cal_accuracy(y_test, y_pred_gini)

    print("Results Using Entropy:")
    # Prediction using entropy
    y_pred_entropy = prediction(X_test, clf_entropy)
    cal_accuracy(y_test, y_pred_entropy)

# Calling the main function
if __name__ == "__main__":
    main()

Data Information:
Dataset Length:  625
Dataset Shape:  (625, 5)
Dataset:    0  1  2  3  4
0  B  1  1  1  1
1  R  1  1  1  2
2  R  1  1  1  3
3  R  1  1  1  4
4  R  1  1  1  5
Results Using Gini Index:
Predicted values:
['R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'R' 'L'
 'L' 'R' 'L' 'R' 'L' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'L'
 'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'R' 'L' 'R'
 'R' 'L' 'R' 'R' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'R' 'L'
 'R' 'R' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L'
 'L' 'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R'
 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R'
 'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R'
 'L' 'R' 'R' 'L' 'L' 'R' 'R' 'R']

Confusion Matrix:  [[ 0  6  7]
 [ 0 67 18]
 [ 0 19 71]]
Accuracy :  73.4042553191
Report :
              precision    recall  f1-score   support

           B       0.00      0.00      0.00        13
           L       0.73      0.79      0.76        85
           R       0.74      0.79      0.76        90

 avg / total       0.68      0.73      0.71       188

Results Using Entropy:
Predicted values:
['R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L'
 'L' 'R' 'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'L' 'L'
 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'L'
 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'L' 'L' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'R' 'L'
 'R' 'R' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'R' 'R' 'R' 'R' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L'
 'L' 'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R'
 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R'
 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'L' 'R'
 'R' 'R' 'L' 'L' 'L' 'R' 'R' 'R']

Confusion Matrix:  [[ 0  6  7]
 [ 0 63 22]
 [ 0 20 70]]
Accuracy :  70.7446808511
Report :
              precision    recall  f1-score   support

           B       0.00      0.00      0.00        13
           L       0.71      0.74      0.72        85
           R       0.71      0.78      0.74        90

 avg / total       0.66      0.71      0.68       188
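The Gini index, entropy, and information gain criteria described under "Terms used in code" can be computed by hand for a small set of labels. This sketch is not part of the original program; it just spells out the quantities that the "gini" and "entropy" criteria optimize when sklearn chooses a split.

```python
from collections import Counter
import math

def gini(labels):
    # Gini impurity: 1 - sum(p_i^2); lower means purer.
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    # Shannon entropy: -sum(p_i * log2(p_i)); higher means more impure.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, subsets):
    # Entropy of the parent minus the size-weighted entropy of the splits.
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in subsets)

labels = ['L', 'L', 'R', 'R']
print(gini(labels))     # 0.5  (an even two-class split is maximally impure)
print(entropy(labels))  # 1.0
# A split that separates the classes perfectly has maximal gain:
print(information_gain(labels, [['L', 'L'], ['R', 'R']]))
```

A pure subset such as ['L', 'L', 'L'] has Gini 0 and entropy 0, which is why attributes with lower Gini index (or higher information gain) are preferred at each node.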
73.4042553191\nReport : \n precision recall f1-score support\n B 0.00 0.00 0.00 13\n L 0.73 0.79 0.76 85\n R 0.74 0.79 0.76 90\navg/total 0.68 0.73 0.71 188\n\nResults Using Entropy:\nPredicted values:\n['R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L'\n 'L' 'R' 'L' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'L' 'L'\n 'L' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'L'\n 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'L' 'R' 'L' 'L' 'L' 'R'\n 'R' 'L' 'R' 'L' 'R' 'R' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'R' 'R' 'L' 'R' 'L'\n 'R' 'R' 'L' 'L' 'L' 'R' 'R' 'L' 'L' 'L' 'R' 'L' 'L' 'R' 'R' 'R' 'R' 'R'\n 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L'\n 'L' 'L' 'L' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'L' 'R'\n 'L' 'R' 'R' 'L' 'L' 'R' 'L' 'R' 'R' 'R' 'R' 'R' 'L' 'R' 'R' 'R' 'R' 'R'\n 'R' 'L' 'R' 'L' 'R' 'R' 'L' 'R' 'L' 'R' 'L' 'R' 'L' 'L' 'L' 'L' 'L' 'R'\n 'R' 'R' 'L' 'L' 'L' 'R' 'R' 'R']\n\nConfusion Matrix: [[ 0 6 7]\n [ 0 63 22]\n [ 0 20 70]]\nAccuracy : 70.7446808511\nReport : \n precision recall f1-score support\n B 0.00 0.00 0.00 13\n L 0.71 0.74 0.72 85\n R 0.71 0.78 0.74 90\navg / total 0.66 0.71 0.68 188"},{"code":null,"e":46271,"s":46254,"text":"Data Infomation:"},{"code":null,"e":46297,"s":46271,"text":"Results Using Gini Index:"},{"code":null,"e":46320,"s":46297,"text":"Results Using Entropy:"},{"code":null,"e":46334,"s":46320,"text":"shubham_singh"},{"code":null,"e":46344,"s":46334,"text":"knbarnwal"},{"code":null,"e":46357,"s":46344,"text":"khyatichat23"},{"code":null,"e":46383,"s":46357,"text":"Advanced Computer Subject"},{"code":null,"e":46400,"s":46383,"text":"Machine Learning"},{"code":null,"e":46419,"s":46400,"text":"Technical Scripter"},{"code":null,"e":46436,"s":46419,"text":"Machine Learning"},{"code":null,"e":46534,"s":46436,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link 
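The figures printed by cal_accuracy can be cross-checked from the confusion matrix alone. The sketch below is illustrative and not part of the original program; the matrix is the gini-criterion result quoted above (rows are true classes B, L, R; columns are predicted classes). It also shows why class B scores 0.00 everywhere: its column is all zeros, meaning the depth-3 tree never predicts B.

```python
import numpy as np

# Gini-criterion confusion matrix quoted in the output above.
# Rows are the true classes (B, L, R), columns the predicted ones.
cm = np.array([[0,  6,  7],
               [0, 67, 18],
               [0, 19, 71]])

# Accuracy is the fraction of samples on the diagonal.
accuracy = np.trace(cm) / cm.sum() * 100
print(round(accuracy, 10))  # 73.4042553191, matching the report

# Recall = diagonal / row sums, precision = diagonal / column sums.
# The B column sums to zero (B is never predicted), so guard the division.
col_sums = cm.sum(axis=0)
recall = np.diag(cm) / cm.sum(axis=1)
precision = np.diag(cm) / np.where(col_sums == 0, 1, col_sums)
print(recall.round(2))     # per-class recall for B, L, R
print(precision.round(2))  # per-class precision for B, L, R
```

Rounded to two decimals, these reproduce the 0.79/0.79 recall and 0.73/0.74 precision shown for L and R in the report.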
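The gini and entropy criteria compared above optimise different impurity measures. As a hedged illustration (not part of the article's program; the proportions come from the dataset's stated class distribution of 46.08% L, 7.84% B and 46.08% R), both measures can be evaluated for the root node directly:

```python
import math

# Class proportions from the Balance Scale dataset description:
# 46.08% L, 7.84% B, 46.08% R.
p = [0.4608, 0.0784, 0.4608]

# Gini impurity: 1 - sum(p_i^2); 0 for a perfectly pure node.
gini = 1 - sum(pi ** 2 for pi in p)

# Entropy: -sum(p_i * log2(p_i)); also 0 for a pure node.
entropy = -sum(pi * math.log2(pi) for pi in p)

print(round(gini, 4))     # roughly 0.5692
print(round(entropy, 4))  # roughly 1.3181
```

Each split is chosen to maximise the reduction of this impurity between a node and its children (information gain, in the entropy case), which is why attributes yielding lower child impurity are preferred.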
In order to update this array, write a recursive function that adds the current node's data to sum[level], where level is the level of the current node, and then recursively calls the same method for the child nodes with level as level + 1. Below is the implementation of the above approach:

C++

// C++ implementation of the approach
#include <iostream>
#include <algorithm>
#include <vector>
using namespace std;

// A Binary Tree Node
struct Node {
    int data;
    struct Node *left, *right;
};

// Utility function to create a new tree node
Node* newNode(int data)
{
    Node* temp = new Node;
    temp->data = data;
    temp->left = temp->right = NULL;
    return temp;
}

// Utility function to print
// the contents of an array
void printArr(int arr[], int n)
{
    for (int i = 0; i < n; i++)
        cout << arr[i] << endl;
}

// Function to return the height
// of the binary tree
int getHeight(Node* root)
{
    if (root->left == NULL && root->right == NULL)
        return 0;

    int left = 0;
    if (root->left != NULL)
        left = getHeight(root->left);

    int right = 0;
    if (root->right != NULL)
        right = getHeight(root->right);

    return (max(left, right) + 1);
}

// Recursive function to update sum[] array
// such that sum[i] stores the sum
// of all the elements at ith level
void calculateLevelSum(Node* node, int level, int sum[])
{
    if (node == NULL)
        return;

    // Add current node data to the sum
    // of the current node's level
    sum[level] += node->data;

    // Recursive call for left and right sub-tree
    calculateLevelSum(node->left, level + 1, sum);
    calculateLevelSum(node->right, level + 1, sum);
}

// Driver code
int main()
{
    // Create the binary tree
    Node* root = newNode(6);
    root->left = newNode(4);
    root->right = newNode(8);
    root->left->left = newNode(3);
    root->left->right = newNode(5);
    root->right->left = newNode(7);
    root->right->right = newNode(9);

    // Count of levels in the
    // given binary tree
    int levels = getHeight(root) + 1;

    // To store the sum at every level
    // (a vector is used here because a variable-length
    // array cannot be initialized in standard C++)
    vector<int> sum(levels, 0);
    calculateLevelSum(root, 0, sum.data());

    // Print the required sums
    printArr(sum.data(), levels);

    return 0;
}

Java

// Java implementation of the approach
class Sol {

    // A Binary Tree Node
    static class Node {
        int data;
        Node left, right;
    };

    // Utility function to create a new tree node
    static Node newNode(int data)
    {
        Node temp = new Node();
        temp.data = data;
        temp.left = temp.right = null;
        return temp;
    }

    // Utility function to print
    // the contents of an array
    static void printArr(int arr[], int n)
    {
        for (int i = 0; i < n; i++)
            System.out.print(arr[i] + " ");
    }

    // Function to return the height
    // of the binary tree
    static int getHeight(Node root)
    {
        if (root.left == null && root.right == null)
            return 0;

        int left = 0;
        if (root.left != null)
            left = getHeight(root.left);

        int right = 0;
        if (root.right != null)
            right = getHeight(root.right);

        return (Math.max(left, right) + 1);
    }

    // Recursive function to update sum[] array
    // such that sum[i] stores the sum
    // of all the elements at ith level
    static void calculateLevelSum(Node node, int level, int sum[])
    {
        if (node == null)
            return;

        // Add current node data to the sum
        // of the current node's level
        sum[level] += node.data;

        // Recursive call for left and right sub-tree
        calculateLevelSum(node.left, level + 1, sum);
        calculateLevelSum(node.right, level + 1, sum);
    }

    // Driver code
    public static void main(String args[])
    {
        // Create the binary tree
        Node root = newNode(6);
        root.left = newNode(4);
        root.right = newNode(8);
        root.left.left = newNode(3);
        root.left.right = newNode(5);
        root.right.left = newNode(7);
        root.right.right = newNode(9);

        // Count of levels in the
        // given binary tree
        int levels = getHeight(root) + 1;

        // To store the sum at every level
        int sum[] = new int[levels];
        calculateLevelSum(root, 0, sum);

        // Print the required sums
        printArr(sum, levels);
    }
}
// This code is contributed by andrew1234

Python3

# Python3 implementation of above algorithm

# Utility class to create a node
class Node:
    def __init__(self, key):
        self.data = key
        self.left = self.right = None

# Utility function to create a tree node
def newNode(data):
    temp = Node(0)
    temp.data = data
    temp.left = temp.right = None
    return temp

# Utility function to print
# the contents of an array
def printArr(arr, n):
    i = 0
    while (i < n):
        print(arr[i])
        i = i + 1

# Function to return the height
# of the binary tree
def getHeight(root):
    if (root.left == None and root.right == None):
        return 0

    left = 0
    if (root.left != None):
        left = getHeight(root.left)

    right = 0
    if (root.right != None):
        right = getHeight(root.right)

    return (max(left, right) + 1)

sum = []

# Recursive function to update sum[] array
# such that sum[i] stores the sum
# of all the elements at ith level
def calculateLevelSum(node, level):
    global sum
    if (node == None):
        return

    # Add current node data to the sum
    # of the current node's level
    sum[level] += node.data

    # Recursive call for left and right sub-tree
    calculateLevelSum(node.left, level + 1)
    calculateLevelSum(node.right, level + 1)

# Driver code

# Create the binary tree
root = newNode(6)
root.left = newNode(4)
root.right = newNode(8)
root.left.left = newNode(3)
root.left.right = newNode(5)
root.right.left = newNode(7)
root.right.right = newNode(9)

# Count of levels in the
# given binary tree
levels = getHeight(root) + 1

# To store the sum at every level
sum = [0] * levels
calculateLevelSum(root, 0)

# Print the required sums
printArr(sum, levels)

# This code is contributed by Arnab Kundu

C#

// C# implementation of the approach
using System;

class GFG {

    // A Binary Tree Node
    public class Node {
        public int data;
        public Node left, right;
    };

    // Utility function to create a new tree node
    static Node newNode(int data)
    {
        Node temp = new Node();
        temp.data = data;
        temp.left = temp.right = null;
        return temp;
    }

    // Utility function to print
    // the contents of an array
    static void printArr(int[] arr, int n)
    {
        for (int i = 0; i < n; i++)
            Console.WriteLine(arr[i]);
    }

    // Function to return the height
    // of the binary tree
    static int getHeight(Node root)
    {
        if (root.left == null && root.right == null)
            return 0;

        int left = 0;
        if (root.left != null)
            left = getHeight(root.left);

        int right = 0;
        if (root.right != null)
            right = getHeight(root.right);

        return (Math.Max(left, right) + 1);
    }

    // Recursive function to update sum[] array
    // such that sum[i] stores the sum
    // of all the elements at ith level
    static void calculateLevelSum(Node node, int level, int[] sum)
    {
        if (node == null)
            return;

        // Add current node data to the sum
        // of the current node's level
        sum[level] += node.data;

        // Recursive call for left and right sub-tree
        calculateLevelSum(node.left, level + 1, sum);
        calculateLevelSum(node.right, level + 1, sum);
    }

    // Driver code
    public static void Main(String[] args)
    {
        // Create the binary tree
        Node root = newNode(6);
        root.left = newNode(4);
        root.right = newNode(8);
        root.left.left = newNode(3);
        root.left.right = newNode(5);
        root.right.left = newNode(7);
        root.right.right = newNode(9);

        // Count of levels in the
        // given binary tree
        int levels = getHeight(root) + 1;

        // To store the sum at every level
        int[] sum = new int[levels];
        calculateLevelSum(root, 0, sum);

        // Print the required sums
        printArr(sum, levels);
    }
}
// This code is contributed by 29AjayKumar

Output:
6
12
24

Time Complexity: O(N)
Auxiliary Space: O(N)
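As an aside (this block is an addition for illustration, not part of the original article): the same per-level sums can also be computed without the separate height pass, using an iterative level-order traversal with a queue. A minimal Java sketch:

```java
import java.util.*;

public class LevelSumBFS {
    // Simple binary tree node, mirroring the article's Node.
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // Returns a list where element i is the sum of all node
    // values at level i, computed by breadth-first traversal.
    static List<Integer> levelSums(Node root) {
        List<Integer> sums = new ArrayList<>();
        if (root == null)
            return sums;
        Queue<Node> q = new ArrayDeque<>();
        q.add(root);
        while (!q.isEmpty()) {
            int width = q.size(); // number of nodes on the current level
            int levelSum = 0;
            for (int i = 0; i < width; i++) {
                Node n = q.poll();
                levelSum += n.data;
                if (n.left != null) q.add(n.left);
                if (n.right != null) q.add(n.right);
            }
            sums.add(levelSum);
        }
        return sums;
    }

    public static void main(String[] args) {
        // Same tree as in the article.
        Node root = new Node(6);
        root.left = new Node(4);
        root.right = new Node(8);
        root.left.left = new Node(3);
        root.left.right = new Node(5);
        root.right.left = new Node(7);
        root.right.right = new Node(9);
        System.out.println(levelSums(root)); // [6, 12, 24]
    }
}
```

This avoids both the recursion and the extra getHeight() traversal, at the cost of the queue's O(width) space.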
How to use Array.BinarySearch() Method in C# | Set -1 - GeeksforGeeks

29 May, 2021

Array.BinarySearch() method is used to search for a value in a sorted one-dimensional array. This method uses the binary search algorithm, which searches a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half. Otherwise, narrow it to the upper half.
Repeatedly check until the value is found or the interval is empty.

Important Points:

Before calling this method, the array must be sorted.
This method will return a negative integer if the array doesn't contain the specified value.
The array must be one-dimensional; otherwise, this method can't be used.
The IComparable interface must be implemented by the value or by every element of the array.
If more than one matching element is found in the array, the method will return the index of only one of the occurrences, and it is not necessarily the index of the first occurrence.

There are a total of 8 methods in the overload list of this method, as follows:

BinarySearch(Array, Object)
BinarySearch(Array, Object, IComparer)
BinarySearch(Array, Int32, Int32, Object)
BinarySearch(Array, Int32, Int32, Object, IComparer)
BinarySearch(T[], T)
BinarySearch(T[], T, IComparer)
BinarySearch(T[], Int32, Int32, T)
BinarySearch(T[], Int32, Int32, T, IComparer)

BinarySearch(Array, Object)

This method is used to search for a specific element in the entire 1-D sorted array. It uses the IComparable interface that is implemented by each element of the 1-D array and by the specified object. This method is an O(log n) operation, where n is the Length of the specified array.

Syntax: public static int BinarySearch (Array arr, object val);

Parameters:
arr: It is the sorted 1-D array to search.
val: It is the object to search for.

Return Value: It returns the index of the specified val in the specified arr if the val is found; otherwise, it returns a negative number.
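The negative-return convention can be tried out directly. As an illustrative addition (not part of the original article): Java's Arrays.binarySearch follows the same convention as C#'s Array.BinarySearch, returning the bitwise complement of the insertion point when the value is absent, so ~result recovers where the value would go:

```java
import java.util.Arrays;

public class ComplementDemo {
    public static void main(String[] args) {
        // The array must already be sorted before a binary search.
        int[] arr = { 1, 2, 3, 4, 5, 6, 7 };

        // Value present: the plain zero-based index is returned.
        System.out.println(Arrays.binarySearch(arr, 4)); // 3

        // Value absent: a negative number is returned whose bitwise
        // complement is the index at which the value would be inserted.
        int res = Arrays.binarySearch(arr, 8);
        System.out.println(res);  // -8
        System.out.println(~res); // 7 (8 would go after the last element)
    }
}
```

The detailed cases for these negative return values are enumerated next.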
There are different cases of return values as follows:\nIf the val is not found and valis less than one or more elements in the arr, the negative number returned is the bitwise complement of the index of the first element that is larger than val.\nIf the val is not found and val is greater than all elements in the arr, the negative number returned is the bitwise complement of (the index of the last element plus 1).\nIf this method is called with a non-sorted array, the return value can be incorrect and a negative number could be returned, even if the val is present in the arr.\nExceptions: \nArgumentNullException: If the arr is null.\nRankException: If the arr is multidimensional.\nArgumentException: If the val is of a type which is not compatible with the elements of the arr.\nInvalidOperationException: If the val does not implement the IComparable interface, and the search encounters an element that does not implement the IComparable interface.\nBelow programs illustrate the above-discussed method:\nExample 1: \nC#\n// C# program to illustrate the// Array.BinarySearch(Array, Object)// Methodusing System; class GFG { // Main Method public static void Main(String[] args) { // taking an 1-D Array int[] arr = new int[7] { 1, 5, 7, 4, 6, 2, 3 }; // for this method array // must be sorted Array.Sort(arr); Console.Write(\"The elements of Sorted Array: \"); // calling the method to // print the values display(arr); // taking the element which is // to search for in a variable // It is not present in the array object s = 8; // calling the method containing // BinarySearch method result(arr, s); // taking the element which is // to search for in a variable // It is present in the array object s1 = 4; // calling the method containing // BinarySearch method result(arr, s1); } // containing BinarySearch Method static void result(int[] arr2, object k) { // using the method int res = Array.BinarySearch(arr2, k); if (res < 0) { Console.WriteLine(\"\\nThe element to search for \" 
                              + "({0}) is not found.", k);
        }
        else {
            Console.WriteLine("The element to search for "
                              + "({0}) is at index {1}.", k, res);
        }
    }

    // display method
    static void display(int[] arr1)
    {
        // Displaying Elements of array
        foreach(int i in arr1)
            Console.Write(i + " ");
    }
}

Output:

The elements of Sorted Array: 1 2 3 4 5 6 7 
The element to search for (8) is not found.
The element to search for (4) is at index 3.

Example 2:

C#

// C# program to illustrate the
// Array.BinarySearch(Array, Object)
// Method
using System;

class GFG {

    // Main Method
    public static void Main(String[] args)
    {
        // taking a 1-D Array
        int[] arr = new int[7] { 1, 5, 7, 4, 6, 2, 3 };

        // for this method the array
        // must be sorted
        Array.Sort(arr);

        Console.Write("The elements of Sorted Array: ");

        // calling the method to
        // print the values
        display(arr);

        // it will return a negative value as
        // 9 is not present in the array
        Console.WriteLine("\nIndex of 9 is: "
                          + Array.BinarySearch(arr, 9));
    }

    // display method
    static void display(int[] arr1)
    {
        // Displaying Elements of array
        foreach(int i in arr1)
            Console.Write(i + " ");
    }
}

Output:

The elements of Sorted Array: 1 2 3 4 5 6 7 
Index of 9 is: -8

BinarySearch(Array, Object, IComparer) Method

This method is used to search for a specific element in the entire 1-D sorted array using the specified IComparer interface.

Syntax: public static int BinarySearch(Array arr, Object val, IComparer comparer)

Parameters:
arr: The one-dimensional sorted array in which the search will happen.
val: The object value which is to be searched for.
comparer: The IComparer implementation that is used when comparing elements.

Return Value: It returns the index of the specified val in the specified arr if the val is found; otherwise it returns a negative number.
There are different cases of return values as follows:

If the val is not found and val is less than one or more elements in the arr, the negative number returned is the bitwise complement of the index of the first element that is larger than val.
If the val is not found and val is greater than all elements in the arr, the negative number returned is the bitwise complement of (the index of the last element plus 1).
If this method is called with a non-sorted array, the return value can be incorrect and a negative number could be returned, even if the val is present in the arr.

Exceptions:

ArgumentNullException: If the arr is null.
RankException: If arr is multidimensional.
ArgumentException: If the comparer is null, and val is of a type that is not compatible with the elements of arr.
InvalidOperationException: If the comparer is null, val does not implement the IComparable interface, and the search encounters an element that does not implement the IComparable interface.

Example:

C#

// C# program to demonstrate the
// Array.BinarySearch(Array,
// Object, IComparer) Method
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        // initializes a new Array
        Array arr = Array.CreateInstance(typeof(Int32), 5);

        // Array elements
        arr.SetValue(20, 0);
        arr.SetValue(10, 1);
        arr.SetValue(30, 2);
        arr.SetValue(40, 3);
        arr.SetValue(50, 4);

        Console.WriteLine("The original Array");

        // calling "display" function
        display(arr);

        Console.WriteLine("\nsorted array");

        // sorting the Array
        Array.Sort(arr);
        display(arr);

        Console.WriteLine("\n1st call");

        // search for object 10
        object obj1 = 10;

        // call the "FindObj" function
        FindObj(arr, obj1);

        Console.WriteLine("\n2nd call");

        // search for object 60
        object obj2 = 60;
        FindObj(arr, obj2);
    }

    // find object method
    public static void FindObj(Array Arr, object Obj)
    {
        int index = Array.BinarySearch(Arr, Obj,
                            StringComparer.CurrentCulture);

        if (index < 0) {
            Console.WriteLine("The object {0} is not found\nNext"
                              + " larger object is at index {1}",
                              Obj, ~index);
        }
        else {
            Console.WriteLine("The object {0} is at index {1}",
                              Obj, index);
        }
    }

    // display method
    public static void display(Array arr)
    {
        foreach(int g in arr)
        {
            Console.WriteLine(g);
        }
    }
}

Output:

The original Array
20
10
30
40
50

sorted array
10
20
30
40
50

1st call
The object 10 is at index 0

2nd call
The object 60 is not found
Next larger object is at index 5

BinarySearch(Array, Int32, Int32, Object) Method

This method is used to search for a value in a range of elements of a 1-D sorted array. It uses the IComparable interface implemented by each element of the array and by the specified value. It searches only within a boundary specified by the user.

Syntax: public static int BinarySearch(Array arr, int i, int len, object val);

Parameters:
arr: It is the 1-D array in which the user has to search for an element.
i: It is the starting index of the range from where the user wants to start the search.
len: It is the length of the range in which the user wants to search.
val: It is the value which the user searches for.

Return Value: It returns the index of the specified val in the specified arr if the val is found; otherwise it returns a negative number.
There are different cases of return values as follows:

If the val is not found and val is less than one or more elements in the arr, the negative number returned is the bitwise complement of the index of the first element that is larger than val.
If the val is not found and val is greater than all elements in the arr, the negative number returned is the bitwise complement of (the index of the last element plus 1).
If this method is called with a non-sorted array, the return value can be incorrect and a negative number could be returned, even if the val is present in the arr.

Exceptions:

ArgumentNullException: If the arr is null.
RankException: If arr is multidimensional.
ArgumentOutOfRangeException: If the index is less than the lower bound of the array OR the length is less than 0.
ArgumentException: If the index and length do not specify a valid range in the array OR the value is of a type which is not compatible with the elements of the array.
InvalidOperationException: If the value does not implement the IComparable interface, and the search encounters an element that does not implement the IComparable interface.

Example:

C#

// C# Program to illustrate the use of
// Array.BinarySearch(Array, Int32,
// Int32, Object) Method
using System;

class GFG {

    // Main Method
    static void Main()
    {
        // initializing the integer array
        int[] intArr = { 42, 5, 7, 12, 56, 1, 32 };

        // sorts intArr, as it must be
        // sorted before using the method
        Array.Sort(intArr);

        // printing the sorted array
        foreach(int i in intArr)
            Console.Write(i + " " + "\n");

        // intArr is the array we want to
        // search in, 1 is the starting index
        // of the range to search, and 5 is the
        // length of the range to search.
        // 32 is the object to search for
        int index = Array.BinarySearch(intArr, 1, 5, 32);

        if (index >= 0) {

            // if the element is found it
            // returns the index of the element
            Console.WriteLine("Index of 32 is : " + index);
        }
        else {

            // if the element is not present
            // in the array, or if it is not in
            // the specified range, it prints this
            Console.Write("Element is not found");
        }

        // searching for 44 in the same range;
        // as the element is not present,
        // it prints a negative value
        int index1 = Array.BinarySearch(intArr, 1, 5, 44);
        Console.WriteLine("Index of 44 is :" + index1);
    }
}

Output:

1 
5 
7 
12 
32 
42 
56 
Index of 32 is : 4
Index of 44 is :-7

BinarySearch(Array, Int32, Int32, Object, IComparer) Method

This method is used to search for a value in a range of elements of a 1-D sorted array, using a specified IComparer interface.

Syntax: public static int BinarySearch(Array arr, int index, int length, Object value, IComparer comparer)

Parameters:
arr: The sorted one-dimensional Array which is to be searched.
index: The starting index of the range from which the search will start.
length: The length of the range in which the search will happen.
value: The object to search for.
comparer: The IComparer implementation to use when comparing elements.

Return Value: It returns the index of the specified value in the specified arr if the value is found; otherwise it returns a negative number.
There are different cases of return values as follows:

If the value is not found and value is less than one or more elements in the array, the negative number returned is the bitwise complement of the index of the first element that is larger than value.
If the value is not found and value is greater than all elements in the array, the negative number returned is the bitwise complement of (the index of the last element plus 1).
If this method is called with a non-sorted array, the return value can be incorrect and a negative number could be returned, even if the value is present in the array.

Example: In this example, the CreateInstance() method is used to create a typed array, some integer values are stored in it, and some values are searched for after the array is sorted.

C#

// C# program to demonstrate the
// Array.BinarySearch(Array,
// Int32, Int32, Object,
// IComparer) Method
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        // initializes a new Array
        Array arr = Array.CreateInstance(typeof(Int32), 8);

        // Array elements
        arr.SetValue(20, 0);
        arr.SetValue(10, 1);
        arr.SetValue(30, 2);
        arr.SetValue(40, 3);
        arr.SetValue(50, 4);
        arr.SetValue(80, 5);
        arr.SetValue(70, 6);
        arr.SetValue(60, 7);

        Console.WriteLine("The original Array");

        // calling "display" function
        display(arr);

        Console.WriteLine("\nsorted array");

        // sorting the Array
        Array.Sort(arr);
        display(arr);

        Console.WriteLine("\n1st call");

        // search for object 10
        object obj1 = 10;

        // call the "FindObj" function
        FindObj(arr, obj1);

        Console.WriteLine("\n2nd call");

        // search for object 60
        object obj2 = 60;
        FindObj(arr, obj2);
    }

    // find object method
    public static void FindObj(Array Arr, object Obj)
    {
        // search only in the range starting at
        // index 1 with length 4
        int index = Array.BinarySearch(Arr, 1, 4, Obj,
                            StringComparer.CurrentCulture);

        if (index < 0) {
            Console.WriteLine("The object {0} is not found\n"
                              + "Next larger object is at index {1}",
                              Obj, ~index);
        }
        else {
            Console.WriteLine("The object {0} is at "
                              + "index {1}", Obj, index);
        }
    }

    // display method
    public static
    void display(Array arr)
    {
        foreach(int g in arr)
        {
            Console.WriteLine(g);
        }
    }
}

Output:

The original Array
20
10
30
40
50
80
70
60

sorted array
10
20
30
40
50
60
70
80

1st call
The object 10 is not found
Next larger object is at index 1

2nd call
The object 60 is not found
Next larger object is at index 5

arorakashish0911
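Every overload discussed above signals "not found" in the same way: the return value is the bitwise complement of the insertion point. The short sketch below is a supplementary example, not part of the original program set (the class name InsertionPointDemo is invented for illustration); it shows how the ~ operator recovers that insertion point from a negative result:

```csharp
// Supplementary sketch: recovering the insertion point
// from a negative BinarySearch result with ~
using System;

class InsertionPointDemo {
    static void Main()
    {
        // already sorted, as BinarySearch requires
        int[] arr = { 1, 2, 3, 4, 5, 6, 7 };

        // 8 is greater than every element, so the method returns
        // the bitwise complement of (last index + 1): ~7, i.e. -8
        int res = Array.BinarySearch(arr, 8);
        Console.WriteLine(res);   // -8
        Console.WriteLine(~res);  // 7 : index where 8 would be inserted

        // 0 is less than every element; the first larger element
        // is at index 0, so the method returns ~0, i.e. -1
        int res2 = Array.BinarySearch(arr, 0);
        Console.WriteLine(~res2); // 0 : index where 0 would be inserted
    }
}
```

Since ~res equals -res - 1, this also explains outputs such as "Index of 44 is :-7" above: the next larger element sits at index ~(-7) = 6.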
+ \\\" \\\"); }}\",\n \"e\": 30990,\n \"s\": 29344,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 31124,\n \"s\": 30990,\n \"text\": \"The elements of Sorted Array: 1 2 3 4 5 6 7 \\nThe element to search for (8) is not found.\\nThe element to search for (4) is at index 3.\"\n },\n {\n \"code\": null,\n \"e\": 31137,\n \"s\": 31126,\n \"text\": \"Example 2:\"\n },\n {\n \"code\": null,\n \"e\": 31140,\n \"s\": 31137,\n \"text\": \"C#\"\n },\n {\n \"code\": \"// C# program to illustrate the// Array.BinarySearch(Array, Object)// Methodusing System; class GFG { // Main Method public static void Main(String[] args) { // taking an 1-D Array int[] arr = new int[7] { 1, 5, 7, 4, 6, 2, 3 }; // for this method array // must be sorted Array.Sort(arr); Console.Write(\\\"The elements of Sorted Array: \\\"); // calling the method to // print the values display(arr); // it will return a negative value as // 9 is not present in the array Console.WriteLine(\\\"\\\\nIndex of 9 is: \\\" + Array.BinarySearch(arr, 8)); } // display method static void display(int[] arr1) { // Displaying Elements of array foreach(int i in arr1) Console.Write(i + \\\" \\\"); }}\",\n \"e\": 31954,\n \"s\": 31140,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 32017,\n \"s\": 31954,\n \"text\": \"The elements of Sorted Array: 1 2 3 4 5 6 7 \\nIndex of 9 is: -8\"\n },\n {\n \"code\": null,\n \"e\": 32140,\n \"s\": 32019,\n \"text\": \"This method is used to search a specific element in the entire 1-D sorted array using the specified IComparer interface.\"\n },\n {\n \"code\": null,\n \"e\": 32432,\n \"s\": 32140,\n \"text\": \"Syntax: public static int BinarySearch(Array arr, Object val, IComparer comparer)Parameters: arr : The one-dimensional sorted array in which the search will happen. val : The object value which is to search for. comparer : When comparing elements then the IComparer implementation is used. 
\"\n },\n {\n \"code\": null,\n \"e\": 32625,\n \"s\": 32432,\n \"text\": \"Return Value: It returns the index of the specified val in the specified arr if the val is found otherwise it returns a negative number. There are different cases of return values as follows: \"\n },\n {\n \"code\": null,\n \"e\": 32817,\n \"s\": 32625,\n \"text\": \"If the val is not found and val is less than one or more elements in the arr, the negative number returned is the bitwise complement of the index of the first element that is larger than val.\"\n },\n {\n \"code\": null,\n \"e\": 32988,\n \"s\": 32817,\n \"text\": \"If the val is not found and val is greater than all elements in the arr, the negative number returned is the bitwise complement of (the index of the last element plus 1).\"\n },\n {\n \"code\": null,\n \"e\": 33152,\n \"s\": 32988,\n \"text\": \"If this method is called with a non-sorted array, the return value can be incorrect and a negative number could be returned, even if the val is present in the arr.\"\n },\n {\n \"code\": null,\n \"e\": 33166,\n \"s\": 33152,\n \"text\": \"Exceptions: \"\n },\n {\n \"code\": null,\n \"e\": 33209,\n \"s\": 33166,\n \"text\": \"ArgumentNullException: If the arr is null.\"\n },\n {\n \"code\": null,\n \"e\": 33252,\n \"s\": 33209,\n \"text\": \"RankException: If arr is multidimensional.\"\n },\n {\n \"code\": null,\n \"e\": 33335,\n \"s\": 33252,\n \"text\": \"ArgumentException: If the range is less than lower bound OR length is less than 0.\"\n },\n {\n \"code\": null,\n \"e\": 33451,\n \"s\": 33335,\n \"text\": \"ArgumentException: If the comparer is null, and value is of a type that is not compatible with the elements of arr.\"\n },\n {\n \"code\": null,\n \"e\": 33643,\n \"s\": 33451,\n \"text\": \"InvalidOperationException: If the comparer is null, value does not implement the IComparable interface, and the search encounters an element that does not implement the IComparable interface.\"\n },\n {\n \"code\": null,\n \"e\": 
33654,\n \"s\": 33643,\n \"text\": \"Example: \"\n },\n {\n \"code\": null,\n \"e\": 33657,\n \"s\": 33654,\n \"text\": \"C#\"\n },\n {\n \"code\": \"// C# program to demonstrate the// Array.BinarySearch(Array,// Object, IComparer) Methodusing System; class GFG { // Main Method public static void Main() { // initializes a new Array. Array arr = Array.CreateInstance(typeof(Int32), 5); // Array elements arr.SetValue(20, 0); arr.SetValue(10, 1); arr.SetValue(30, 2); arr.SetValue(40, 3); arr.SetValue(50, 4); Console.WriteLine(\\\"The original Array\\\"); // calling \\\"display\\\" function display(arr); Console.WriteLine(\\\"\\\\nsorted array\\\"); // sorting the Array Array.Sort(arr); display(arr); Console.WriteLine(\\\"\\\\n1st call\\\"); // search for object 10 object obj1 = 10; // call the \\\"FindObj\\\" function FindObj(arr, obj1); Console.WriteLine(\\\"\\\\n2nd call\\\"); object obj2 = 60; FindObj(arr, obj2); } // find object method public static void FindObj(Array Arr, object Obj) { int index = Array.BinarySearch(Arr, Obj, StringComparer.CurrentCulture); if (index < 0) { Console.WriteLine(\\\"The object {0} is not found\\\\nNext\\\" + \\\" larger object is at index {1}\\\", Obj, ~index); } else { Console.WriteLine(\\\"The object {0} is at index {1}\\\", Obj, index); } } // display method public static void display(Array arr) { foreach(int g in arr) { Console.WriteLine(g); } }}\",\n \"e\": 35305,\n \"s\": 33657,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 35476,\n \"s\": 35305,\n \"text\": \"The original Array\\n20\\n10\\n30\\n40\\n50\\n\\nsorted array\\n10\\n20\\n30\\n40\\n50\\n\\n1st call\\nThe object 10 is at index 0\\n\\n2nd call\\nThe object 60 is not found\\nNext larger object is at index 5\"\n },\n {\n \"code\": null,\n \"e\": 35735,\n \"s\": 35478,\n \"text\": \"This method is used to search a value in the range of elements in a 1-D sorted array. It uses the IComparable interface implemented by each element of the array and the specified value. 
It searches only in a specified boundary which is defined by the user.\"\n },\n {\n \"code\": null,\n \"e\": 36104,\n \"s\": 35735,\n \"text\": \"Syntax: public static int BinarySearch(Array arr, int i, int len, object val);Parameters: arr: It is 1-D array in which the user have to search for an element. i: It is the starting index of the range from where the user want to start the search. len: It is the length of the range in which the user want to search. val: It is the value which the user to search for. \"\n },\n {\n \"code\": null,\n \"e\": 36297,\n \"s\": 36104,\n \"text\": \"Return Value: It returns the index of the specified val in the specified arr if the val is found otherwise it returns a negative number. There are different cases of return values as follows: \"\n },\n {\n \"code\": null,\n \"e\": 36489,\n \"s\": 36297,\n \"text\": \"If the val is not found and val is less than one or more elements in the arr, the negative number returned is the bitwise complement of the index of the first element that is larger than val.\"\n },\n {\n \"code\": null,\n \"e\": 36660,\n \"s\": 36489,\n \"text\": \"If the val is not found and val is greater than all elements in the arr, the negative number returned is the bitwise complement of (the index of the last element plus 1).\"\n },\n {\n \"code\": null,\n \"e\": 36824,\n \"s\": 36660,\n \"text\": \"If this method is called with a non-sorted array, the return value can be incorrect and a negative number could be returned, even if the val is present in the arr.\"\n },\n {\n \"code\": null,\n \"e\": 36838,\n \"s\": 36824,\n \"text\": \"Exceptions: \"\n },\n {\n \"code\": null,\n \"e\": 36881,\n \"s\": 36838,\n \"text\": \"ArgumentNullException: If the arr is null.\"\n },\n {\n \"code\": null,\n \"e\": 36924,\n \"s\": 36881,\n \"text\": \"RankException: If arr is multidimensional.\"\n },\n {\n \"code\": null,\n \"e\": 37026,\n \"s\": 36924,\n \"text\": \"ArgumentOutOfRangeException: If the index is less than lower bound 
of array OR length is less than 0.\"\n },\n {\n \"code\": null,\n \"e\": 37193,\n \"s\": 37026,\n \"text\": \"ArgumentException: If the index and length do not specify the valid range in array OR the value is of the type which is not compatible with the elements of the array.\"\n },\n {\n \"code\": null,\n \"e\": 37363,\n \"s\": 37193,\n \"text\": \"InvalidOperationException: If value does not implement the IComparable interface, and the search encounters an element that does not implement the IComparable interface.\"\n },\n {\n \"code\": null,\n \"e\": 37373,\n \"s\": 37363,\n \"text\": \"Example: \"\n },\n {\n \"code\": null,\n \"e\": 37376,\n \"s\": 37373,\n \"text\": \"C#\"\n },\n {\n \"code\": \"// C# Program to illustrate the use of// Array.BinarySearch(Array, Int32,// Int32, Object) Methodusing System;using System.IO; class GFG { // Main Method static void Main() { // initializing the integer array int[] intArr = { 42, 5, 7, 12, 56, 1, 32 }; // sorts the intArray as it must be // sorted before using method Array.Sort(intArr); // printing the sorted array foreach(int i in intArr) Console.Write(i + \\\" \\\" + \\\"\\\\n\\\"); // intArr is the array we want to find // and 1 is the starting index // of the range to search. 5 is the // length of the range to search. // 32 is the object to search int index = Array.BinarySearch(intArr, 1, 5, 32); if (index >= 0) { // if the element is found it // returns the index of the element Console.WriteLine(\\\"Index of 32 is : \\\" + index); } else { // if the element is not // present in the array or // if it is not in the // specified range it prints this Console.Write(\\\"Element is not found\\\"); } // intArr is the array we want to // find. and 1 is the starting // index of the range to search. 5 is // the length of the range to search // 44 is the object to search int index1 = Array.BinarySearch(intArr, 1, 5, 44); // as the element is not present // it prints a negative value. 
Console.WriteLine(\\\"Index of 44 is :\\\" + index1); }}\",\n \"e\": 38985,\n \"s\": 37376,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 39048,\n \"s\": 38985,\n \"text\": \"1 \\n5 \\n7 \\n12 \\n32 \\n42 \\n56 \\nIndex of 32 is : 4\\nIndex of 44 is :-7\"\n },\n {\n \"code\": null,\n \"e\": 39175,\n \"s\": 39050,\n \"text\": \"This method is used to search a value in the range of elements in a 1-D sorted array using a specified IComparer interface. \"\n },\n {\n \"code\": null,\n \"e\": 39606,\n \"s\": 39175,\n \"text\": \"Syntax: public static int BinarySearch(Array arr, int index, int length, Object value, IComparer comparer)Parameters: arr : The sorted one-dimensional Array which is to be searched. index : The starting index of the range from which searching will start. length : The length of the range in which the search will happen. value : The object to search for. comparer : When comparing elements then use the IComparer implementation. \"\n },\n {\n \"code\": null,\n \"e\": 39804,\n \"s\": 39606,\n \"text\": \"Return Value: It returns the index of the specified value in the specified arr, if the value is found otherwise it returns a negative number. 
There are different cases of return values as follows: \"\n },\n {\n \"code\": null,\n \"e\": 40004,\n \"s\": 39804,\n \"text\": \"If the value is not found and value is less than one or more elements in the array, the negative number returned is the bitwise complement of the index of the first element that is larger than value.\"\n },\n {\n \"code\": null,\n \"e\": 40181,\n \"s\": 40004,\n \"text\": \"If the value is not found and value is greater than all elements in the array, the negative number returned is the bitwise complement of (the index of the last element plus 1).\"\n },\n {\n \"code\": null,\n \"e\": 40349,\n \"s\": 40181,\n \"text\": \"If this method is called with a non-sorted array, the return value can be incorrect and a negative number could be returned, even if the value is present in the array.\"\n },\n {\n \"code\": null,\n \"e\": 40513,\n \"s\": 40349,\n \"text\": \"Example: In this example, here we use “CreateInstance()” method to create a typed array and stores some integer value and search some values after sort the array. \"\n },\n {\n \"code\": null,\n \"e\": 40516,\n \"s\": 40513,\n \"text\": \"C#\"\n },\n {\n \"code\": \"// C# program to demonstrate the// Array.BinarySearch(Array,// Int32, Int32, Object,// IComparer) Methodusing System; class GFG { // Main Method public static void Main() { // initializes a new Array. 
Array arr = Array.CreateInstance(typeof(Int32), 8); // Array elements arr.SetValue(20, 0); arr.SetValue(10, 1); arr.SetValue(30, 2); arr.SetValue(40, 3); arr.SetValue(50, 4); arr.SetValue(80, 5); arr.SetValue(70, 6); arr.SetValue(60, 7); Console.WriteLine(\\\"The original Array\\\"); // calling \\\"display\\\" function display(arr); Console.WriteLine(\\\"\\\\nsorted array\\\"); // sorting the Array Array.Sort(arr); display(arr); Console.WriteLine(\\\"\\\\n1st call\\\"); // search for object 10 object obj1 = 10; // call the \\\"FindObj\\\" function FindObj(arr, obj1); Console.WriteLine(\\\"\\\\n2nd call\\\"); object obj2 = 60; FindObj(arr, obj2); } // find object method public static void FindObj(Array Arr, object Obj) { int index = Array.BinarySearch(Arr, 1, 4, Obj, StringComparer.CurrentCulture); if (index < 0) { Console.WriteLine(\\\"The object {0} is not found\\\\n\\\" + \\\"Next larger object is at index {1}\\\", Obj, ~index); } else { Console.WriteLine(\\\"The object {0} is at \\\" + \\\"index {1}\\\", Obj, index); } } // display method public static void display(Array arr) { foreach(int g in arr) { Console.WriteLine(g); } }}\",\n \"e\": 42307,\n \"s\": 40516,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 42528,\n \"s\": 42307,\n \"text\": \"The original Array\\n20\\n10\\n30\\n40\\n50\\n80\\n70\\n60\\n\\nsorted array\\n10\\n20\\n30\\n40\\n50\\n60\\n70\\n80\\n\\n1st call\\nThe object 10 is not found\\nNext larger object is at index 1\\n\\n2nd call\\nThe object 60 is not found\\nNext larger object is at index 5\"\n },\n {\n \"code\": null,\n \"e\": 42547,\n \"s\": 42530,\n \"text\": \"arorakashish0911\"\n },\n {\n \"code\": null,\n \"e\": 42561,\n \"s\": 42547,\n \"text\": \"CSharp-Arrays\"\n },\n {\n \"code\": null,\n \"e\": 42575,\n \"s\": 42561,\n \"text\": \"CSharp-method\"\n },\n {\n \"code\": null,\n \"e\": 42578,\n \"s\": 42575,\n \"text\": \"C#\"\n },\n {\n \"code\": null,\n \"e\": 42676,\n \"s\": 42578,\n \"text\": \"Writing code in 
comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 42704,\n \"s\": 42676,\n \"text\": \"C# Dictionary with examples\"\n },\n {\n \"code\": null,\n \"e\": 42719,\n \"s\": 42704,\n \"text\": \"C# | Delegates\"\n },\n {\n \"code\": null,\n \"e\": 42742,\n \"s\": 42719,\n \"text\": \"C# | Method Overriding\"\n },\n {\n \"code\": null,\n \"e\": 42764,\n \"s\": 42742,\n \"text\": \"C# | Abstract Classes\"\n },\n {\n \"code\": null,\n \"e\": 42810,\n \"s\": 42764,\n \"text\": \"Difference between Ref and Out keywords in C#\"\n },\n {\n \"code\": null,\n \"e\": 42833,\n \"s\": 42810,\n \"text\": \"Extension Method in C#\"\n },\n {\n \"code\": null,\n \"e\": 42855,\n \"s\": 42833,\n \"text\": \"C# | Class and Object\"\n },\n {\n \"code\": null,\n \"e\": 42873,\n \"s\": 42855,\n \"text\": \"C# | Constructors\"\n },\n {\n \"code\": null,\n \"e\": 42913,\n \"s\": 42873,\n \"text\": \"C# | String.IndexOf( ) Method | Set - 1\"\n }\n]"}}},{"rowIdx":536,"cells":{"title":{"kind":"string","value":"Recursive program to check if number is palindrome or not - GeeksforGeeks"},"text":{"kind":"string","value":"24 Feb, 2022\nGiven a number, the task is to write a recursive function which checks if the given number is palindrome or not. Examples: \nInput : 121\nOutput : yes\n\nInput : 532\nOutput : no\n \nThe approach for writing the function is to call the function recursively till the number is completely traversed from the back. Use a temp variable to store the reverse of the number according to the formula which has been obtained in this post. Pass the temp variable in the parameter and once the base case of n==0 is achieved, return temp which stores the reverse of a number. 
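The article's language tabs also list PHP and JavaScript, but their code did not survive extraction. Below is a JavaScript sketch of the same recursive reverse-and-compare approach, reconstructed to be consistent with the C++, Java, Python3, and C# versions in this article — treat it as an illustration, not the page's original code:

```javascript
// Recursive JavaScript sketch: temp accumulates the reversed digits of n.
function rev(n, temp) {
  // base case: every digit of n has been consumed
  if (n === 0) return temp;
  // shift temp one decimal place left and append the last digit of n
  return rev(Math.floor(n / 10), temp * 10 + (n % 10));
}

function isPalindrome(n) {
  // a number is a palindrome when it equals its own digit reversal
  return rev(n, 0) === n;
}

console.log(isPalindrome(121) ? "yes" : "no"); // yes
console.log(isPalindrome(532) ? "yes" : "no"); // no
```

As in the other implementations, temp grows step by step (for 121: 0 → 1 → 12 → 121) until n reaches 0 and the accumulated reverse is compared with the original number.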
Below is the implementation of the above approach: \nC++\nJava\nPython3\nC#\nPHP\nJavascript\n// Recursive C++ program to check if the// number is palindrome or not#include using namespace std; // recursive function that returns the reverse of digitsint rev(int n, int temp){ // base case if (n == 0) return temp; // stores the reverse of a number temp = (temp * 10) + (n % 10); return rev(n / 10, temp);} // Driver Codeint main(){ int n = 121; int temp = rev(n, 0); if (temp == n) cout << \"yes\" << endl; else cout << \"no\" << endl; return 0;}\n// Recursive Java program to// check if the number is// palindrome or notimport java.io.*; class GFG{ // recursive function that// returns the reverse of digitsstatic int rev(int n, int temp){ // base case if (n == 0) return temp; // stores the reverse // of a number temp = (temp * 10) + (n % 10); return rev(n / 10, temp);} // Driver Codepublic static void main (String[] args){ int n = 121; int temp = rev(n, 0); if (temp == n) System.out.println(\"yes\"); else System.out.println(\"no\" );}} // This code is contributed by anuj_67.\n# Recursive Python3 program to check# if the number is palindrome or not # Recursive function that returns# the reverse of digitsdef rev(n, temp): # base case if (n == 0): return temp; # stores the reverse of a number temp = (temp * 10) + (n % 10); return rev(n // 10, temp); # Driver Coden = 121;temp = rev(n, 0); if (temp == n): print(\"yes\")else: print(\"no\") # This code is contributed# by mits\n// Recursive C# program to// check if the number is// palindrome or notusing System; class GFG{ // recursive function// that returns the// reverse of digitsstatic int rev(int n, int temp){ // base case if (n == 0) return temp; // stores the reverse // of a number temp = (temp * 10) + (n % 10); return rev(n / 10, temp);} // Driver Codepublic static void Main (){ int n = 121; int temp = rev(n, 0); if (temp == n) Console.WriteLine(\"yes\"); else Console.WriteLine(\"no\" );}} // This code is contributed// by 
anuj_67.\n\n\nyes\n \nvt_m\nSach_Code\nMithun Kumar\nmayanktyagi1709\namartyaghoshgfg\nnumber-digits\npalindrome\nMathematical\nMathematical\npalindrome\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nMerge two sorted arrays\nModulo Operator (%) in C/C++ with Examples\nPrime Numbers\nProgram to find GCD or HCF of two numbers\nSieve of Eratosthenes\nPrint all possible combinations of r elements in a given array of size n\nOperators in C / C++\nThe Knight's tour problem | Backtracking-1\nProgram for factorial of a number\nProgram for Decimal to Binary Conversion"},"parsed":{"kind":"list like","value":[{"code":null,"e":26115,"s":26087,"text":"\n24 Feb, 2022"},{"code":null,"e":26240,"s":26115,"text":"Given a number, the task is to write a recursive function which checks if the given number is palindrome or not. Examples: "},{"code":null,"e":26290,"s":26240,"text":"Input : 121\nOutput : yes\n\nInput : 532\nOutput : no"},{"code":null,"e":26726,"s":26292,"text":"The approach for writing the function is to call the function recursively till the number is completely traversed from the back. Use a temp variable to store the reverse of the number according to the formula which has been obtained in this post. Pass the temp variable in the parameter and once the base case of n==0 is achieved, return temp which stores the reverse of a number. 
Below is the implementation of the above approach: "},{"code":null,"e":26730,"s":26726,"text":"C++"},{"code":null,"e":26735,"s":26730,"text":"Java"},{"code":null,"e":26743,"s":26735,"text":"Python3"},{"code":null,"e":26746,"s":26743,"text":"C#"},{"code":null,"e":26750,"s":26746,"text":"PHP"},{"code":null,"e":26761,"s":26750,"text":"Javascript"},{"code":"// Recursive C++ program to check if the// number is palindrome or not#include using namespace std; // recursive function that returns the reverse of digitsint rev(int n, int temp){ // base case if (n == 0) return temp; // stores the reverse of a number temp = (temp * 10) + (n % 10); return rev(n / 10, temp);} // Driver Codeint main(){ int n = 121; int temp = rev(n, 0); if (temp == n) cout << \"yes\" << endl; else cout << \"no\" << endl; return 0;}","e":27287,"s":26761,"text":null},{"code":"// Recursive Java program to// check if the number is// palindrome or notimport java.io.*; class GFG{ // recursive function that// returns the reverse of digitsstatic int rev(int n, int temp){ // base case if (n == 0) return temp; // stores the reverse // of a number temp = (temp * 10) + (n % 10); return rev(n / 10, temp);} // Driver Codepublic static void main (String[] args){ int n = 121; int temp = rev(n, 0); if (temp == n) System.out.println(\"yes\"); else System.out.println(\"no\" );}} // This code is contributed by anuj_67.","e":27877,"s":27287,"text":null},{"code":"# Recursive Python3 program to check# if the number is palindrome or not # Recursive function that returns# the reverse of digitsdef rev(n, temp): # base case if (n == 0): return temp; # stores the reverse of a number temp = (temp * 10) + (n % 10); return rev(n // 10, temp); # Driver Coden = 121;temp = rev(n, 0); if (temp == n): print(\"yes\")else: print(\"no\") # This code is contributed# by mits","e":28305,"s":27877,"text":null},{"code":"// Recursive C# program to// check if the number is// palindrome or notusing System; class GFG{ // recursive function// that 
returns the// reverse of digitsstatic int rev(int n, int temp){ // base case if (n == 0) return temp; // stores the reverse // of a number temp = (temp * 10) + (n % 10); return rev(n / 10, temp);} // Driver Codepublic static void Main (){ int n = 121; int temp = rev(n, 0); if (temp == n) Console.WriteLine(\"yes\"); else Console.WriteLine(\"no\" );}} // This code is contributed// by anuj_67.","e":28906,"s":28305,"text":null},{"code":"","e":29364,"s":28906,"text":null},{"code":"","e":29920,"s":29364,"text":null},{"code":null,"e":29924,"s":29920,"text":"yes"},{"code":null,"e":29931,"s":29926,"text":"vt_m"},{"code":null,"e":29941,"s":29931,"text":"Sach_Code"},{"code":null,"e":29954,"s":29941,"text":"Mithun Kumar"},{"code":null,"e":29970,"s":29954,"text":"mayanktyagi1709"},{"code":null,"e":29986,"s":29970,"text":"amartyaghoshgfg"},{"code":null,"e":30000,"s":29986,"text":"number-digits"},{"code":null,"e":30011,"s":30000,"text":"palindrome"},{"code":null,"e":30024,"s":30011,"text":"Mathematical"},{"code":null,"e":30037,"s":30024,"text":"Mathematical"},{"code":null,"e":30048,"s":30037,"text":"palindrome"},{"code":null,"e":30146,"s":30048,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":30170,"s":30146,"text":"Merge two sorted arrays"},{"code":null,"e":30213,"s":30170,"text":"Modulo Operator (%) in C/C++ with Examples"},{"code":null,"e":30227,"s":30213,"text":"Prime Numbers"},{"code":null,"e":30269,"s":30227,"text":"Program to find GCD or HCF of two numbers"},{"code":null,"e":30291,"s":30269,"text":"Sieve of Eratosthenes"},{"code":null,"e":30364,"s":30291,"text":"Print all possible combinations of r elements in a given array of size n"},{"code":null,"e":30385,"s":30364,"text":"Operators in C / C++"},{"code":null,"e":30428,"s":30385,"text":"The Knight's tour problem | Backtracking-1"},{"code":null,"e":30462,"s":30428,"text":"Program for factorial of a number"}],"string":"[\n {\n \"code\": null,\n 
\"e\": 26115,\n \"s\": 26087,\n \"text\": \"\\n24 Feb, 2022\"\n },\n {\n \"code\": null,\n \"e\": 26240,\n \"s\": 26115,\n \"text\": \"Given a number, the task is to write a recursive function which checks if the given number is palindrome or not. Examples: \"\n },\n {\n \"code\": null,\n \"e\": 26290,\n \"s\": 26240,\n \"text\": \"Input : 121\\nOutput : yes\\n\\nInput : 532\\nOutput : no\"\n },\n {\n \"code\": null,\n \"e\": 26726,\n \"s\": 26292,\n \"text\": \"The approach for writing the function is to call the function recursively till the number is completely traversed from the back. Use a temp variable to store the reverse of the number according to the formula which has been obtained in this post. Pass the temp variable in the parameter and once the base case of n==0 is achieved, return temp which stores the reverse of a number. Below is the implementation of the above approach: \"\n },\n {\n \"code\": null,\n \"e\": 26730,\n \"s\": 26726,\n \"text\": \"C++\"\n },\n {\n \"code\": null,\n \"e\": 26735,\n \"s\": 26730,\n \"text\": \"Java\"\n },\n {\n \"code\": null,\n \"e\": 26743,\n \"s\": 26735,\n \"text\": \"Python3\"\n },\n {\n \"code\": null,\n \"e\": 26746,\n \"s\": 26743,\n \"text\": \"C#\"\n },\n {\n \"code\": null,\n \"e\": 26750,\n \"s\": 26746,\n \"text\": \"PHP\"\n },\n {\n \"code\": null,\n \"e\": 26761,\n \"s\": 26750,\n \"text\": \"Javascript\"\n },\n {\n \"code\": \"// Recursive C++ program to check if the// number is palindrome or not#include using namespace std; // recursive function that returns the reverse of digitsint rev(int n, int temp){ // base case if (n == 0) return temp; // stores the reverse of a number temp = (temp * 10) + (n % 10); return rev(n / 10, temp);} // Driver Codeint main(){ int n = 121; int temp = rev(n, 0); if (temp == n) cout << \\\"yes\\\" << endl; else cout << \\\"no\\\" << endl; return 0;}\",\n \"e\": 27287,\n \"s\": 26761,\n \"text\": null\n },\n {\n \"code\": \"// Recursive Java program to// check if the number 
is// palindrome or notimport java.io.*; class GFG{ // recursive function that// returns the reverse of digitsstatic int rev(int n, int temp){ // base case if (n == 0) return temp; // stores the reverse // of a number temp = (temp * 10) + (n % 10); return rev(n / 10, temp);} // Driver Codepublic static void main (String[] args){ int n = 121; int temp = rev(n, 0); if (temp == n) System.out.println(\\\"yes\\\"); else System.out.println(\\\"no\\\" );}} // This code is contributed by anuj_67.\",\n \"e\": 27877,\n \"s\": 27287,\n \"text\": null\n },\n {\n \"code\": \"# Recursive Python3 program to check# if the number is palindrome or not # Recursive function that returns# the reverse of digitsdef rev(n, temp): # base case if (n == 0): return temp; # stores the reverse of a number temp = (temp * 10) + (n % 10); return rev(n // 10, temp); # Driver Coden = 121;temp = rev(n, 0); if (temp == n): print(\\\"yes\\\")else: print(\\\"no\\\") # This code is contributed# by mits\",\n \"e\": 28305,\n \"s\": 27877,\n \"text\": null\n },\n {\n \"code\": \"// Recursive C# program to// check if the number is// palindrome or notusing System; class GFG{ // recursive function// that returns the// reverse of digitsstatic int rev(int n, int temp){ // base case if (n == 0) return temp; // stores the reverse // of a number temp = (temp * 10) + (n % 10); return rev(n / 10, temp);} // Driver Codepublic static void Main (){ int n = 121; int temp = rev(n, 0); if (temp == n) Console.WriteLine(\\\"yes\\\"); else Console.WriteLine(\\\"no\\\" );}} // This code is contributed// by anuj_67.\",\n \"e\": 28906,\n \"s\": 28305,\n \"text\": null\n },\n {\n \"code\": \"\",\n \"e\": 29364,\n \"s\": 28906,\n \"text\": null\n },\n {\n \"code\": \"\",\n \"e\": 29920,\n \"s\": 29364,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 29924,\n \"s\": 29920,\n \"text\": \"yes\"\n },\n {\n \"code\": null,\n \"e\": 29931,\n \"s\": 29926,\n \"text\": \"vt_m\"\n },\n {\n \"code\": null,\n \"e\": 29941,\n \"s\": 
29931,\n \"text\": \"Sach_Code\"\n },\n {\n \"code\": null,\n \"e\": 29954,\n \"s\": 29941,\n \"text\": \"Mithun Kumar\"\n },\n {\n \"code\": null,\n \"e\": 29970,\n \"s\": 29954,\n \"text\": \"mayanktyagi1709\"\n },\n {\n \"code\": null,\n \"e\": 29986,\n \"s\": 29970,\n \"text\": \"amartyaghoshgfg\"\n },\n {\n \"code\": null,\n \"e\": 30000,\n \"s\": 29986,\n \"text\": \"number-digits\"\n },\n {\n \"code\": null,\n \"e\": 30011,\n \"s\": 30000,\n \"text\": \"palindrome\"\n },\n {\n \"code\": null,\n \"e\": 30024,\n \"s\": 30011,\n \"text\": \"Mathematical\"\n },\n {\n \"code\": null,\n \"e\": 30037,\n \"s\": 30024,\n \"text\": \"Mathematical\"\n },\n {\n \"code\": null,\n \"e\": 30048,\n \"s\": 30037,\n \"text\": \"palindrome\"\n },\n {\n \"code\": null,\n \"e\": 30146,\n \"s\": 30048,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 30170,\n \"s\": 30146,\n \"text\": \"Merge two sorted arrays\"\n },\n {\n \"code\": null,\n \"e\": 30213,\n \"s\": 30170,\n \"text\": \"Modulo Operator (%) in C/C++ with Examples\"\n },\n {\n \"code\": null,\n \"e\": 30227,\n \"s\": 30213,\n \"text\": \"Prime Numbers\"\n },\n {\n \"code\": null,\n \"e\": 30269,\n \"s\": 30227,\n \"text\": \"Program to find GCD or HCF of two numbers\"\n },\n {\n \"code\": null,\n \"e\": 30291,\n \"s\": 30269,\n \"text\": \"Sieve of Eratosthenes\"\n },\n {\n \"code\": null,\n \"e\": 30364,\n \"s\": 30291,\n \"text\": \"Print all possible combinations of r elements in a given array of size n\"\n },\n {\n \"code\": null,\n \"e\": 30385,\n \"s\": 30364,\n \"text\": \"Operators in C / C++\"\n },\n {\n \"code\": null,\n \"e\": 30428,\n \"s\": 30385,\n \"text\": \"The Knight's tour problem | Backtracking-1\"\n },\n {\n \"code\": null,\n \"e\": 30462,\n \"s\": 30428,\n \"text\": \"Program for factorial of a number\"\n }\n]"}}},{"rowIdx":537,"cells":{"title":{"kind":"string","value":"How to create a table 
in ReactJS ? - GeeksforGeeks

27 Oct, 2021
In this article, we will create a simple table in React.js, just like you would in a normal HTML project, and style it using normal CSS.

Prerequisites: The prerequisites for this project are:
React
Functional Components
JavaScript ES6
HTML Tables & CSS

Creating a React application:

Step 1: Create a React application by typing the following command in the terminal.

npx create-react-app react-table

Step 2: Now go to the project folder, i.e., react-table, by running the following command.

cd react-table

Project Structure: It will look like the following:

Example 1: Here App.js is the default component. At first, we will see how to create a table using hard-coded values. Later we will see how to dynamically render the data from an array inside the table.

Filename: App.js

Javascript

import './App.css';

function App() {
  return (
    <div className="App">
      <table>
        <tr>
          <th>Name</th>
          <th>Age</th>
          <th>Gender</th>
        </tr>
        <tr>
          <td>Anom</td>
          <td>19</td>
          <td>Male</td>
        </tr>
        <tr>
          <td>Megha</td>
          <td>19</td>
          <td>Female</td>
        </tr>
        <tr>
          <td>Subham</td>
          <td>25</td>
          <td>Male</td>
        </tr>
      </table>
    </div>
  );
}

export default App;

In the above example, we simply used the HTML table elements: <table>, <tr>, <th>, and <td>.

Example 2: Now let us see how we can dynamically render data from an array. Instead of manually iterating over the array using a loop, we can use the built-in Array.map() method. The Array.map() method lets you iterate over an array and transform each element with a callback function, which is executed on each of the array's elements. In this case, we will just return a table row on each iteration.

Filename: App.js

Javascript

import './App.css';

// Example of a data array that
// you might receive from an API
const data = [
  { name: "Anom", age: 19, gender: "Male" },
  { name: "Megha", age: 19, gender: "Female" },
  { name: "Subham", age: 25, gender: "Male" },
];

function App() {
  return (
    <div className="App">
      <table>
        <tr>
          <th>Name</th>
          <th>Age</th>
          <th>Gender</th>
        </tr>
        {data.map((val, key) => {
          return (
            <tr key={key}>
              <td>{val.name}</td>
              <td>{val.age}</td>
              <td>{val.gender}</td>
            </tr>
          );
        })}
      </table>
    </div>
  );
}

export default App;

Filename: App.css. Now, let's edit the file named App.css to style the table.

CSS

.App {
  width: 100%;
  height: 100vh;
  display: flex;
  justify-content: center;
  align-items: center;
}

table {
  border: 2px solid forestgreen;
  width: 800px;
  height: 200px;
}

th {
  border-bottom: 1px solid black;
}

td {
  text-align: center;
}

Step to Run Application: Run the application using the following command from the root directory of the project:

npm start

Output: Now open your browser and go to http://localhost:3000/, and you will see the following output.
Appending to list in Python dictionary - GeeksforGeeks

17 Oct, 2021
In this article, we are going to see how to append to a list in a Python dictionary.

In this method, we will use the += operator to append a list into the dictionary. For this, we will take a dictionary and then append elements as a list into the dictionary.

Python3

Details = {"Destination": "China", "Nationality": "Italian", "Age": []}
Details["Age"] += [20, "Twenty"]
print(Details)

Output:

{'Destination': 'China', 'Nationality': 'Italian', 'Age': [20, 'Twenty']}

You can as well append one item.

In this method, we will use a condition to check for the key and then append the list into the dictionary.

Python3

Details = {}
Details["Age"] = [20]
print(Details)

if "Age" in Details:
    Details["Age"].append("Twenty")
    print(Details)

Output:

{'Age': [20]}
{'Age': [20, 'Twenty']}

In this method, we are using the defaultdict() function. It is a part of the collections module; you have to import the function from the collections module to use it in your program.
You can then use it to append the list into the dictionary.

Python3

from collections import defaultdict

Details = defaultdict(list)
Details["Country"].append("India")
print(Details)

Output:

defaultdict(<class 'list'>, {'Country': ['India']})

Since append takes only one parameter, to insert another parameter, repeat the append method.

Python3

from collections import defaultdict

Details = defaultdict(list)
Details["Country"].append("India")
Details["Country"].append("Pakistan")
print(Details)

Output:

defaultdict(<class 'list'>, {'Country': ['India', 'Pakistan']})

We will use the update() function to add a new list into the dictionary. You can use the update() function to embed a dictionary inside another dictionary.

Python3

Details = {}
Details["Age"] = []
Details.update({"Age": [18, 20, 25, 29, 30]})
print(Details)

Output:

{'Age': [18, 20, 25, 29, 30]}

You can convert a list into a value for a key in a Python dictionary using the dict() function.

Python3

Values = [18, 20, 25, 29, 30]
Details = dict({"Age": Values})
print(Details)

Output:

{'Age': [18, 20, 25, 29, 30]}
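The approaches above can be combined into one short runnable recap. This is a sketch of my own; the data values are the ones used in the article, and the variable names are my own.

```python
# Recap of the appending approaches shown above, in one script.
from collections import defaultdict

# 1. Appending a list with the += operator
details = {"Age": []}
details["Age"] += [20, "Twenty"]

# 2. Checking for the key before appending a single item
if "Age" in details:
    details["Age"].append(25)

# 3. defaultdict(list) creates the list on first access
countries = defaultdict(list)
countries["Country"].append("India")
countries["Country"].append("Pakistan")

# 4. update() installs a whole new list as the value
ages = {}
ages.update({"Age": [18, 20, 25, 29, 30]})

print(details)           # {'Age': [20, 'Twenty', 25]}
print(dict(countries))   # {'Country': ['India', 'Pakistan']}
print(ages)              # {'Age': [18, 20, 25, 29, 30]}
```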
Homomorphism & Isomorphism of Group - GeeksforGeeks

26 May, 2021

Introduction: We say that "o" is a binary operation on a set G if G is a non-empty set, G * G = { (a,b) : a, b ∈ G }, and o : G * G -> G. Here, aob denotes the image of the ordered pair (a,b) under the function/operation o.
Example – "+" is a binary operation on G (any non-empty set) if and only if a+b ∈ G ; ∀ a,b ∈ G, and a+b gives the same result every time the same pair is added.
Real example – '+' is a binary operation on the set of natural numbers N, because a+b ∈ N ; ∀ a,b ∈ N, and a+b gives the same result every time the same pair is added.

Laws of Binary Operation: For a binary operation o : G * G -> G on the set G:

1. Commutative –
aob = boa ; ∀ a,b ∈ G

Example: '+' is a binary operation on the set of natural numbers N. Taking any 2 random natural numbers, say 6 & 70, so here a = 6 & b = 70: a+b = 6 + 70 = 76 = 70 + 6 = b + a. This is true for all natural numbers.

2. Associative –
ao(boc) = (aob)oc ; ∀ a,b,c ∈ G

Example: '+' is a binary operation on the set of natural numbers N.
Taking any 3 random natural numbers, say 2, 3 & 7, so here a = 2, b = 3 and c = 7:
LHS: a+(b+c) = 2 + (3 + 7) = 2 + 10 = 12
RHS: (a+b)+c = (2 + 3) + 7 = 5 + 7 = 12
This is true for all natural numbers.

3. Left Distributive –
ao(b*c) = (aob) * (aoc) ; ∀ a,b,c ∈ G

4. Right Distributive –
(b*c)oa = (boa) * (coa) ; ∀ a,b,c ∈ G

5. Left Cancellation –
aob = aoc => b = c ; ∀ a,b,c ∈ G

6. Right Cancellation –
boa = coa => b = c ; ∀ a,b,c ∈ G

Algebraic Structure: A non-empty set G equipped with one or more binary operations is called an algebraic structure. Example: a. (N,+) and b. (R, +, .), where N is the set of natural numbers & R is the set of real numbers. Here '.' (dot) specifies a multiplication operation.

GROUP: An algebraic structure (G, o), where G is a non-empty set & 'o' is a binary operation defined on G, is called a Group if the binary operation "o" satisfies the following properties –

1. Closure –
a ∈ G, b ∈ G => aob ∈ G ; ∀ a,b ∈ G

2. Associativity –
(aob)oc = ao(boc) ; ∀ a,b,c ∈ G

3. Identity Element – There exists e in G such that aoe = eoa = a ; ∀ a ∈ G (Example – for addition, the identity is 0)

4. Existence of Inverse – For each element a ∈ G, there exists an inverse a^(-1) ∈ G such that aoa^(-1) = a^(-1)oa = e

Homomorphism of groups: Let (G,o) & (G',o') be 2 groups. A mapping "f" from a group (G,o) to a group (G',o') is said to be a homomorphism if –

f(aob) = f(a) o' f(b) ∀ a,b ∈ G

The essential point here is: the mapping f : G -> G' need be neither one-one nor onto, i.e., 'f' need not be bijective.

Example – If (R,+) is the group of all real numbers under the operation '+' & (R - {0}, *) is another group of non-zero real numbers under the operation '*' (multiplication), and f is a mapping from (R,+) to (R - {0}, *) defined as f(a) = 2^a ; ∀ a ∈ R, then f is a homomorphism: f(a+b) = 2^(a+b) = 2^a * 2^b = f(a) . f(b).
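The identity f(a+b) = f(a) * f(b) for f(a) = 2^a can be checked numerically. This is a minimal sketch; the sample points and the use of a floating-point tolerance are my own choices.

```python
# Numeric check of the homomorphism property for f(a) = 2^a,
# mapping (R, +) into (R - {0}, *): f(a + b) = f(a) * f(b).
import math

def f(a):
    return 2.0 ** a

samples = [-2.0, -0.5, 0.0, 1.0, 3.5]
for a in samples:
    for b in samples:
        # 2^(a+b) = 2^a * 2^b, up to floating-point rounding
        assert math.isclose(f(a + b), f(a) * f(b))

print("f(a + b) = f(a) * f(b) holds on all sampled pairs")
```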
So the rule of homomorphism is satisfied & hence f is a homomorphism.\nHomomorphism Into – A mapping ‘f’, that is homomorphism & also Into.\nHomomorphism Onto – A mapping ‘f’, that is homomorphism & also onto.\nIsomorphism of Group :Let (G,o) & (G’,o’) be 2 groups, a mapping “f ” from a group (G,o) to a group (G’,o’) is said to be an isomorphism if –\n1. f(aob) = f(a) o' f(b) ∀ a,b ∈ G\n2. f is a one- one mapping\n3. f is an onto mapping.\nIf ‘f’ is an isomorphic mapping, (G,o) will be isomorphic to the group (G’,o’) & we write :\nG ≅ G'\nNote : A mapping f: X -> Y is called :\nOne – One – If x1 ≠x2, then f(x1) ≠ f(x2) or if f(x1) = f(x2) => x1 = x2. Where x1,x2 ∈ XOnto – If every element in the set Y is the f-image of at least one element of set X. Bijective – If it is one one & Onto.\nOne – One – If x1 ≠x2, then f(x1) ≠ f(x2) or if f(x1) = f(x2) => x1 = x2. Where x1,x2 ∈ X\nOnto – If every element in the set Y is the f-image of at least one element of set X. \nBijective – If it is one one & Onto.\nExample of Isomorphism Group –If G is the multiplicative group of 3 cube-root units , i.e., (G,o) = ( {1, w, w2 } , *) where w3 = 1 & G’ is an additive group of integers modulo 3 – (G’, o’) = ( {1,2,3) , +3). Then : G ≅ G’ , we say G is isomorphic to G’.\nThe structure & order of both the tables are same. The mapping ‘f’ is defined as :f : G -> G’ in such a way that f(1) = 0 , f(w) = 1 & f(w2) = 2.\nHomomorphism property : f(aob) = f(a) o’ f(b) ∀ a,b ∈ G . 
Let us take a = w & b = 1LHS : f(a * b) = f( w * 1 ) = f(w) = 1.RHS : f(a) +3 f(b) = f(w) +3 f(1) = 1 + 0 = 1=>LHS = RHS\nThis mapping f is one-one & onto also, therefore, a homomorphism.\nEngineering Mathematics\nGATE CS\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nActivation Functions\nDifference between Propositional Logic and Predicate Logic\nLogic Notations in LaTeX\nUnivariate, Bivariate and Multivariate data and its analysis\nZ-test\nLayers of OSI Model\nACID Properties in DBMS\nTCP/IP Model\nTypes of Operating Systems\nNormal Forms in DBMS"},"parsed":{"kind":"list like","value":[{"code":null,"e":26137,"s":26109,"text":"\n26 May, 2021"},{"code":null,"e":26680,"s":26137,"text":"Introduction :We can say that “o” is the binary operation on set G if : G is an non-empty set & G * G = { (a,b) : a , b∈ G } and o : G * G –> G. Here, aob denotes the image of ordered pair (a,b) under the function / operation o.Example – “+” is called a binary operation on G (any non-empty set ) if & only if : a+b ∈G ; ∀ a,b ∈G and a+b give the same result every time when added.Real example – ‘+’ is a binary operation on the set of natural numbers ‘N’ because a+b ∈ N ; ∀ a,b ∈N and a+b a+b give the same result every time when added. "},{"code":null,"e":26792,"s":26680,"text":"Laws of Binary Operation :In a binary operation o, such that : o : G * G –> G on the set G is :1. Commutative –"},{"code":null,"e":26814,"s":26792,"text":" aob = boa ; ∀ a,b ∈G"},{"code":null,"e":27063,"s":26814,"text":"Example : ‘+’ is a binary operation on the set of natural numbers ‘N’. Taking any 2 random natural numbers , say 6 & 70, so here a = 6 & b = 70, a+b = 6 + 70 = 76 = 70 + 6 = b + aThis is true for all the numbers that come under the natural number."},{"code":null,"e":27080,"s":27063,"text":"2. 
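The isomorphism above can also be checked mechanically. Below is a small illustrative sketch (not part of the original article; the variable names are my own) that verifies the homomorphism property f(a * b) = f(a) +3 f(b) for all nine pairs of elements, representing w as a complex cube root of unity:

```python
import cmath

# primitive complex cube root of unity: w^3 = 1
w = cmath.exp(2j * cmath.pi / 3)
G = [1, w, w * w]   # the multiplicative group {1, w, w^2}
f = [0, 1, 2]       # f(1) = 0, f(w) = 1, f(w^2) = 2

def index_of(z):
    # which power of w a (floating-point) product is closest to
    return min(range(3), key=lambda k: abs(z - G[k]))

for i in range(3):
    for j in range(3):
        lhs = f[index_of(G[i] * G[j])]   # f(a * b)
        rhs = (f[i] + f[j]) % 3          # f(a) +3 f(b)
        assert lhs == rhs

print("homomorphism property holds for all 9 pairs")
```

Since f is also a bijection between two three-element sets, passing this exhaustive check confirms G ≅ G'.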
Minimum operations to make XOR of array zero - GeeksforGeeks

28 Apr, 2021

We are given an array of n elements. The task is to make XOR of whole array 0. We can do the following to achieve this.
1. We can select any one of the elements.
2. After selecting an element, we can either increment or decrement it by 1.

We need to find the minimum number of increment/decrement operations required on the selected element to make the XOR sum of the whole array zero.

Examples:

Input : arr[] = {2, 4, 8}
Output : Element = 8,
         Operation required = 2
Explanation : Select 8 as the element and decrement it twice, so that it
              becomes 6. Now our array is {2, 4, 6}, whose XOR sum is 0.

Input : arr[] = {1, 1, 1, 1}
Output : Element = 1,
         Operation required = 0
Explanation : Select any of the 1s; the XOR sum is already 0, so no
              operation is required.

Naive Approach: Select an element and find the XOR of the rest of the array. If the selected element were changed to equal that XOR, the XOR of the whole array would become zero. The cost for that element is the absolute difference between the selected element and the obtained XOR. Computing this cost for every element, re-XORing the rest of the array each time, results in a time complexity of O(n^2).

Efficient Approach: Find the XOR of the whole array, XORsum. If we select element arr[i], the XOR of the remaining elements is XORsum ^ arr[i], so the cost for arr[i] is abs(arr[i] - (XORsum ^ arr[i])). The minimum of these values over all elements is the minimum number of operations required, and the element achieving that minimum is the one to select.
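To see why the efficient approach works, note that the XOR of all elements except arr[i] equals XORsum ^ arr[i]; setting arr[i] to that value therefore zeroes the whole array's XOR. A quick illustrative check (the names below are my own, not from the article):

```python
from functools import reduce
from operator import xor

arr = [2, 8, 4, 16]
total = reduce(xor, arr)          # XOR of the whole array (here 30)

for i in range(len(arr)):
    target = total ^ arr[i]       # XOR of every element except arr[i]
    changed = arr[:i] + [target] + arr[i + 1:]
    assert reduce(xor, changed) == 0   # array XOR becomes zero
    # cost of this choice is abs(target - arr[i])

costs = [abs((total ^ a) - a) for a in arr]
best = min(range(len(arr)), key=costs.__getitem__)
print(arr[best], costs[best])     # prints: 16 2
```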
C++

// CPP to find min cost to make
// XOR of whole array zero
#include <iostream>
#include <climits>
#include <cstdlib>
using namespace std;

// function to find min cost
void minCost(int arr[], int n)
{
    int cost = INT_MAX;
    int element = 0;

    // calculate XOR sum of array
    int XOR = 0;
    for (int i = 0; i < n; i++)
        XOR ^= arr[i];

    // find the min cost and element corresponding
    for (int i = 0; i < n; i++) {
        if (cost > abs((XOR ^ arr[i]) - arr[i])) {
            cost = abs((XOR ^ arr[i]) - arr[i]);
            element = arr[i];
        }
    }

    cout << "Element = " << element << endl;
    cout << "Operation required = " << abs(cost);
}

// driver program
int main()
{
    int arr[] = { 2, 8, 4, 16 };
    int n = sizeof(arr) / sizeof(arr[0]);
    minCost(arr, n);
    return 0;
}

Java

// JAVA program to find min cost to make
// XOR of whole array zero
import java.lang.*;

class GFG {

    // function to find min cost
    static void minCost(int[] arr, int n)
    {
        int cost = Integer.MAX_VALUE;
        int element = 0;

        // calculate XOR sum of array
        int XOR = 0;
        for (int i = 0; i < n; i++)
            XOR ^= arr[i];

        // find the min cost and element
        // corresponding
        for (int i = 0; i < n; i++) {
            if (cost > Math.abs((XOR ^ arr[i]) - arr[i])) {
                cost = Math.abs((XOR ^ arr[i]) - arr[i]);
                element = arr[i];
            }
        }

        System.out.println("Element = " + element);
        System.out.println("Operation required = " + Math.abs(cost));
    }

    // driver program
    public static void main(String[] args)
    {
        int[] arr = { 2, 8, 4, 16 };
        int n = arr.length;
        minCost(arr, n);
    }
}
/* This code is contributed by Kriti Shukla */

Python3

# python to find min cost to make
# XOR of whole array zero

# function to find min cost
def minCost(arr, n):
    cost = 999999
    element = 0

    # calculate XOR sum of array
    XOR = 0
    for i in range(0, n):
        XOR ^= arr[i]

    # find the min cost and element
    # corresponding
    for i in range(0, n):
        if (cost > abs((XOR ^ arr[i]) - arr[i])):
            cost = abs((XOR ^ arr[i]) - arr[i])
            element = arr[i]

    print("Element = ", element)
    print("Operation required = ", abs(cost))

# driver program
arr = [2, 8, 4, 16]
n = len(arr)
minCost(arr, n)

# This code is contributed by Sam007

C#

// C# program to find min cost to
// make XOR of whole array zero
using System;

class GFG {

    // function to find min cost
    static void minCost(int[] arr, int n)
    {
        int cost = int.MaxValue;
        int element = 0;

        // calculate XOR sum of array
        int XOR = 0;
        for (int i = 0; i < n; i++)
            XOR ^= arr[i];

        // find the min cost and
        // element corresponding
        for (int i = 0; i < n; i++) {
            if (cost > Math.Abs((XOR ^ arr[i]) - arr[i])) {
                cost = Math.Abs((XOR ^ arr[i]) - arr[i]);
                element = arr[i];
            }
        }

        Console.WriteLine("Element = " + element);
        Console.Write("Operation required = " + Math.Abs(cost));
    }

    // Driver program
    public static void Main()
    {
        int[] arr = { 2, 8, 4, 16 };
        int n = arr.Length;
        minCost(arr, n);
    }
}

// This code is contributed by nitin mittal.

PHP

<?php
// PHP to find min cost to make
// XOR of whole array zero

// function to find min cost
function minCost($arr, $n)
{
    $cost = PHP_INT_MAX;
    $element = 0;

    // calculate XOR sum of array
    $XOR = 0;
    for ($i = 0; $i < $n; $i++)
        $XOR ^= $arr[$i];

    // find the min cost and
    // element corresponding
    for ($i = 0; $i < $n; $i++)
    {
        if ($cost > abs(($XOR ^ $arr[$i]) - $arr[$i]))
        {
            $cost = abs(($XOR ^ $arr[$i]) - $arr[$i]);
            $element = $arr[$i];
        }
    }

    echo "Element = ", $element, "\n";
    echo "Operation required = ", abs($cost);
}

// Driver Code
$arr = array(2, 8, 4, 16);
$n = count($arr);
minCost($arr, $n);

// This code is contributed by vt_m.
?>

Output:

Element = 16
Operation required = 2

Time Complexity : O(n)

This article is contributed by Shivam Pradhan (anuj_charm).
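As a sanity check on the two approaches described above, the sketch below (illustrative, not from the article) cross-checks the naive O(n^2) method against the O(n) method on random inputs:

```python
# Cross-check the naive O(n^2) method against the O(n) method.
import random
from functools import reduce
from operator import xor

def min_ops_naive(arr):
    # for each candidate i, XOR the rest of the array explicitly
    best = None
    for i in range(len(arr)):
        rest = reduce(xor, arr[:i] + arr[i + 1:], 0)
        cost = abs(arr[i] - rest)
        if best is None or cost < best:
            best = cost
    return best

def min_ops_fast(arr):
    total = reduce(xor, arr)
    return min(abs((total ^ a) - a) for a in arr)

random.seed(1)
for _ in range(100):
    arr = [random.randrange(64) for _ in range(random.randrange(2, 8))]
    assert min_ops_naive(arr) == min_ops_fast(arr)
print("naive and efficient methods agree")
```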
8, 4, 16) ;$n = count($arr);minCost($arr, $n); // This code is contributed by vt_m.?>\",\n \"e\": 32639,\n \"s\": 31857,\n \"text\": null\n },\n {\n \"code\": \"\",\n \"e\": 33337,\n \"s\": 32639,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 33346,\n \"s\": 33337,\n \"text\": \"Output: \"\n },\n {\n \"code\": null,\n \"e\": 33382,\n \"s\": 33346,\n \"text\": \"Element = 16\\nOperation required = 2\"\n },\n {\n \"code\": null,\n \"e\": 33844,\n \"s\": 33382,\n \"text\": \"Time Complexity : O(n)This article is contributed by Shivam Pradhan (anuj_charm). If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. \"\n },\n {\n \"code\": null,\n \"e\": 33857,\n \"s\": 33844,\n \"text\": \"nitin mittal\"\n },\n {\n \"code\": null,\n \"e\": 33862,\n \"s\": 33857,\n \"text\": \"vt_m\"\n },\n {\n \"code\": null,\n \"e\": 33869,\n \"s\": 33862,\n \"text\": \"Sam007\"\n },\n {\n \"code\": null,\n \"e\": 33875,\n \"s\": 33869,\n \"text\": \"itsok\"\n },\n {\n \"code\": null,\n \"e\": 33887,\n \"s\": 33875,\n \"text\": \"Bitwise-XOR\"\n },\n {\n \"code\": null,\n \"e\": 33894,\n \"s\": 33887,\n \"text\": \"Arrays\"\n },\n {\n \"code\": null,\n \"e\": 33901,\n \"s\": 33894,\n \"text\": \"Arrays\"\n },\n {\n \"code\": null,\n \"e\": 33999,\n \"s\": 33901,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 34067,\n \"s\": 33999,\n \"text\": \"Maximum and minimum of an array using minimum number of comparisons\"\n },\n {\n \"code\": null,\n \"e\": 34111,\n \"s\": 34067,\n \"text\": \"Top 50 Array Coding Problems for Interviews\"\n },\n {\n \"code\": null,\n 
\"e\": 34159,\n \"s\": 34111,\n \"text\": \"Stack Data Structure (Introduction and Program)\"\n },\n {\n \"code\": null,\n \"e\": 34182,\n \"s\": 34159,\n \"text\": \"Introduction to Arrays\"\n },\n {\n \"code\": null,\n \"e\": 34214,\n \"s\": 34182,\n \"text\": \"Multidimensional Arrays in Java\"\n },\n {\n \"code\": null,\n \"e\": 34228,\n \"s\": 34214,\n \"text\": \"Linear Search\"\n },\n {\n \"code\": null,\n \"e\": 34249,\n \"s\": 34228,\n \"text\": \"Linked List vs Array\"\n },\n {\n \"code\": null,\n \"e\": 34334,\n \"s\": 34249,\n \"text\": \"Given an array A[] and a number x, check for pair in A[] with sum as x (aka Two Sum)\"\n },\n {\n \"code\": null,\n \"e\": 34379,\n \"s\": 34334,\n \"text\": \"Python | Using 2D arrays/lists the right way\"\n }\n]"}}},{"rowIdx":541,"cells":{"title":{"kind":"string","value":"Top 5 Easter Eggs in Python - GeeksforGeeks"},"text":{"kind":"string","value":"22 Jul, 2021\nPython is really an interesting language with very good documentation. In this article, we will go through some fun stuff that isn’t documented and so considered as Easter eggs of Python.\nMost of the programmers should have started your programming journey from printing “Hello World!”. Would you believe that Python has a secret module to print “Hello World” and the name of the module is __hello__\nPython3\nimport __hello__ \nOutput:\nHello world!\nIf you feel bored typing code all day, then check the antigravity module in Python which redirects you to\nhttps://xkcd.com/353/ a web-comic.\nPython3\n# redirects you to https://xkcd.com/353/import antigravity\n“Zen of python” is a guide to Python design principles. It consists of 19 design principles and it is written by an American software developer Tim Peters. This is also by far the only ‘official’ Easter egg that is stated as an ‘Easter egg’ in Python Developer’s Guide. 
You can see them by importing the module “this”.
Python3
import this
Output:
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren’t special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one– and preferably only one –obvious way to do it.
Although that way may not be obvious at first unless you’re Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it’s a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea — let’s do more of those!
Recognizing that the != inequality operator in Python 3.0 was a horrible, finger-pain-inducing mistake, the FLUFL reinstates the <> diamond operator as the sole spelling.
Python3
from __future__ import barry_as_FLUFL

1 <> 2
1 != 2
Output:
True
SyntaxError: with Barry as BDFL, use '<>' instead of '!='
Unlike most languages, Python uses indentation instead of curly braces “{ }”. While making a transition from a language like C++ or Java to Python, it is a bit difficult to adapt to indentation. 
Thus if we try to use braces using __future__ module, Python gives a funny reply “not a chance”.\nPython3\nfrom __future__ import braces\n Output:\nFile \"\", line 1\nSyntaxError: not a chance\npython-utility\nPython\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nHow to Install PIP on Windows ?\nCheck if element exists in list in Python\nHow To Convert Python Dictionary To JSON?\nPython Classes and Objects\nHow to drop one or multiple columns in Pandas Dataframe\nDefaultdict in Python\nPython | Get unique values from a list\nPython | os.path.join() method\nCreate a directory in Python\nPython | Pandas dataframe.groupby()"},"parsed":{"kind":"list like","value":[{"code":null,"e":25537,"s":25509,"text":"\n22 Jul, 2021"},{"code":null,"e":25725,"s":25537,"text":"Python is really an interesting language with very good documentation. In this article, we will go through some fun stuff that isn’t documented and so considered as Easter eggs of Python."},{"code":null,"e":25937,"s":25725,"text":"Most of the programmers should have started your programming journey from printing “Hello World!”. Would you believe that Python has a secret module to print “Hello World” and the name of the module is __hello__"},{"code":null,"e":25945,"s":25937,"text":"Python3"},{"code":"import __hello__ ","e":25963,"s":25945,"text":null},{"code":null,"e":25971,"s":25963,"text":"Output:"},{"code":null,"e":25984,"s":25971,"text":"Hello world!"},{"code":null,"e":26090,"s":25984,"text":"If you feel bored typing code all day, then check the antigravity module in Python which redirects you to"},{"code":null,"e":26125,"s":26090,"text":"https://xkcd.com/353/ a web-comic."},{"code":null,"e":26133,"s":26125,"text":"Python3"},{"code":"# redirects you to https://xkcd.com/353/import antigravity","e":26192,"s":26133,"text":null},{"code":null,"e":26513,"s":26192,"text":"“Zen of python” is a guide to Python design principles. 
It consists of 19 design principles and it is written by an American software developer Tim Peters. This is also by far the only ‘official’ Easter egg that is stated as an ‘Easter egg’ in Python Developer’s Guide. You can see them by importing the module “this”."},{"code":null,"e":26521,"s":26513,"text":"Python3"},{"code":"import this","e":26533,"s":26521,"text":null},{"code":null,"e":26541,"s":26533,"text":"Output:"},{"code":null,"e":27343,"s":26541,"text":"Beautiful is better than ugly.Explicit is better than implicit.Simple is better than complex.Complex is better than complicated.Flat is better than nested.Sparse is better than dense.Readability counts.Special cases aren’t special enough to break the rules.Although practicality beats purity.Errors should never pass silently.Unless explicitly silenced.In the face of ambiguity, refuse the temptation to guess.There should be one– and preferably only one –obvious way to do it.Although that way may not be obvious at first unless you’re Dutch.Now is better than never.Although never is often better than *right* now.If the implementation is hard to explain, it’s a bad idea.If the implementation is easy to explain, it may be a good idea.Namespaces are one honking great idea — let’s do more of those!"},{"code":null,"e":27513,"s":27343,"text":"Recognized that the != inequality operator in Python 3.0 was a horrible, finger pain-inducing mistake, the FLUFL reinstates the <> diamond operator as the sole spelling."},{"code":null,"e":27521,"s":27513,"text":"Python3"},{"code":"from __future__ import barry_as_FLUFL 1 <> 21 != 2","e":27573,"s":27521,"text":null},{"code":null,"e":27581,"s":27573,"text":"Output:"},{"code":null,"e":27644,"s":27581,"text":"True\nSyntaxError: with Barry as BDFL, use '<>' instead of '!='"},{"code":null,"e":27940,"s":27644,"text":"Unlike most of the language, Python uses indentation instead of curly braces “{ }”. 
While making a transition from languages like C++ or JAVA to Python it is a bit difficult to adapt to indentation. Thus if we try to use braces using __future__ module, Python gives a funny reply “not a chance”."},{"code":null,"e":27948,"s":27940,"text":"Python3"},{"code":"from __future__ import braces","e":27978,"s":27948,"text":null},{"code":null,"e":27987,"s":27978,"text":" Output:"},{"code":null,"e":28036,"s":27987,"text":"File \"\", line 1\nSyntaxError: not a chance"},{"code":null,"e":28051,"s":28036,"text":"python-utility"},{"code":null,"e":28058,"s":28051,"text":"Python"},{"code":null,"e":28156,"s":28058,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":28188,"s":28156,"text":"How to Install PIP on Windows ?"},{"code":null,"e":28230,"s":28188,"text":"Check if element exists in list in Python"},{"code":null,"e":28272,"s":28230,"text":"How To Convert Python Dictionary To JSON?"},{"code":null,"e":28299,"s":28272,"text":"Python Classes and Objects"},{"code":null,"e":28355,"s":28299,"text":"How to drop one or multiple columns in Pandas Dataframe"},{"code":null,"e":28377,"s":28355,"text":"Defaultdict in Python"},{"code":null,"e":28416,"s":28377,"text":"Python | Get unique values from a list"},{"code":null,"e":28447,"s":28416,"text":"Python | os.path.join() method"},{"code":null,"e":28476,"s":28447,"text":"Create a directory in Python"}],"string":"[\n {\n \"code\": null,\n \"e\": 25537,\n \"s\": 25509,\n \"text\": \"\\n22 Jul, 2021\"\n },\n {\n \"code\": null,\n \"e\": 25725,\n \"s\": 25537,\n \"text\": \"Python is really an interesting language with very good documentation. In this article, we will go through some fun stuff that isn’t documented and so considered as Easter eggs of Python.\"\n },\n {\n \"code\": null,\n \"e\": 25937,\n \"s\": 25725,\n \"text\": \"Most of the programmers should have started your programming journey from printing “Hello World!”. 
Would you believe that Python has a secret module to print “Hello World” and the name of the module is __hello__\"\n },\n {\n \"code\": null,\n \"e\": 25945,\n \"s\": 25937,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"import __hello__ \",\n \"e\": 25963,\n \"s\": 25945,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 25971,\n \"s\": 25963,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 25984,\n \"s\": 25971,\n \"text\": \"Hello world!\"\n },\n {\n \"code\": null,\n \"e\": 26090,\n \"s\": 25984,\n \"text\": \"If you feel bored typing code all day, then check the antigravity module in Python which redirects you to\"\n },\n {\n \"code\": null,\n \"e\": 26125,\n \"s\": 26090,\n \"text\": \"https://xkcd.com/353/ a web-comic.\"\n },\n {\n \"code\": null,\n \"e\": 26133,\n \"s\": 26125,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"# redirects you to https://xkcd.com/353/import antigravity\",\n \"e\": 26192,\n \"s\": 26133,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 26513,\n \"s\": 26192,\n \"text\": \"“Zen of python” is a guide to Python design principles. It consists of 19 design principles and it is written by an American software developer Tim Peters. This is also by far the only ‘official’ Easter egg that is stated as an ‘Easter egg’ in Python Developer’s Guide. 
You can see them by importing the module “this”.\"\n },\n {\n \"code\": null,\n \"e\": 26521,\n \"s\": 26513,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"import this\",\n \"e\": 26533,\n \"s\": 26521,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 26541,\n \"s\": 26533,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 27343,\n \"s\": 26541,\n \"text\": \"Beautiful is better than ugly.Explicit is better than implicit.Simple is better than complex.Complex is better than complicated.Flat is better than nested.Sparse is better than dense.Readability counts.Special cases aren’t special enough to break the rules.Although practicality beats purity.Errors should never pass silently.Unless explicitly silenced.In the face of ambiguity, refuse the temptation to guess.There should be one– and preferably only one –obvious way to do it.Although that way may not be obvious at first unless you’re Dutch.Now is better than never.Although never is often better than *right* now.If the implementation is hard to explain, it’s a bad idea.If the implementation is easy to explain, it may be a good idea.Namespaces are one honking great idea — let’s do more of those!\"\n },\n {\n \"code\": null,\n \"e\": 27513,\n \"s\": 27343,\n \"text\": \"Recognized that the != inequality operator in Python 3.0 was a horrible, finger pain-inducing mistake, the FLUFL reinstates the <> diamond operator as the sole spelling.\"\n },\n {\n \"code\": null,\n \"e\": 27521,\n \"s\": 27513,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"from __future__ import barry_as_FLUFL 1 <> 21 != 2\",\n \"e\": 27573,\n \"s\": 27521,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 27581,\n \"s\": 27573,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 27644,\n \"s\": 27581,\n \"text\": \"True\\nSyntaxError: with Barry as BDFL, use '<>' instead of '!='\"\n },\n {\n \"code\": null,\n \"e\": 27940,\n \"s\": 27644,\n \"text\": \"Unlike most of the language, Python uses indentation instead of 
curly braces “{ }”. While making a transition from languages like C++ or JAVA to Python it is a bit difficult to adapt to indentation. Thus if we try to use braces using __future__ module, Python gives a funny reply “not a chance”.\"\n },\n {\n \"code\": null,\n \"e\": 27948,\n \"s\": 27940,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"from __future__ import braces\",\n \"e\": 27978,\n \"s\": 27948,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 27987,\n \"s\": 27978,\n \"text\": \" Output:\"\n },\n {\n \"code\": null,\n \"e\": 28036,\n \"s\": 27987,\n \"text\": \"File \\\"\\\", line 1\\nSyntaxError: not a chance\"\n },\n {\n \"code\": null,\n \"e\": 28051,\n \"s\": 28036,\n \"text\": \"python-utility\"\n },\n {\n \"code\": null,\n \"e\": 28058,\n \"s\": 28051,\n \"text\": \"Python\"\n },\n {\n \"code\": null,\n \"e\": 28156,\n \"s\": 28058,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 28188,\n \"s\": 28156,\n \"text\": \"How to Install PIP on Windows ?\"\n },\n {\n \"code\": null,\n \"e\": 28230,\n \"s\": 28188,\n \"text\": \"Check if element exists in list in Python\"\n },\n {\n \"code\": null,\n \"e\": 28272,\n \"s\": 28230,\n \"text\": \"How To Convert Python Dictionary To JSON?\"\n },\n {\n \"code\": null,\n \"e\": 28299,\n \"s\": 28272,\n \"text\": \"Python Classes and Objects\"\n },\n {\n \"code\": null,\n \"e\": 28355,\n \"s\": 28299,\n \"text\": \"How to drop one or multiple columns in Pandas Dataframe\"\n },\n {\n \"code\": null,\n \"e\": 28377,\n \"s\": 28355,\n \"text\": \"Defaultdict in Python\"\n },\n {\n \"code\": null,\n \"e\": 28416,\n \"s\": 28377,\n \"text\": \"Python | Get unique values from a list\"\n },\n {\n \"code\": null,\n \"e\": 28447,\n \"s\": 28416,\n \"text\": \"Python | os.path.join() method\"\n },\n {\n \"code\": null,\n \"e\": 28476,\n \"s\": 28447,\n \"text\": \"Create a directory in Python\"\n 
}\n]"}}},{"rowIdx":542,"cells":{"title":{"kind":"string","value":"AngularJS | ng-disabled Directive - GeeksforGeeks"},"text":{"kind":"string","value":"31 Aug, 2021\nThe ng-disabled Directive in AngularJS is used to enable or disable HTML elements. If the expression inside the ng-disabled attribute returns true then the form field will be disabled or vice versa. It is usually applied on form field (input, select, button, etc). \nSyntax: \n Contents... \nExample 1: This example uses ng-disabled Directive to disable the button. \nhtml\n ng-disabled Directive

<!-- Markup reconstructed: the tags of the original example were
     stripped in this copy, so attribute details are illustrative. -->
<!DOCTYPE html>
<html>
<head>
    <title>ng-disabled Directive</title>
    <!-- AngularJS library (any 1.x build provides ng-disabled) -->
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js"></script>
</head>
<body ng-app="" style="text-align:center">
    <h1 style="color:green">GeeksforGeeks</h1>
    <h2>ng-disabled Directive</h2>
    <div ng-init="disable=false">
        <!-- Clicking the first button sets disable to true,
             so ng-disabled disables the second button -->
        <button ng-click="disable=true">Click to disable</button>
        <input type="button" ng-disabled="disable" value="Button">
    </div>
</body>
</html>
Output: Before clicking the button: 
After clicking the button: 
 
Example 2: This example uses the ng-disabled Directive to enable and disable a button using a checkbox. 
html
<!-- Markup reconstructed as above; attribute details are illustrative. -->
<!DOCTYPE html>
<html>
<head>
    <title>ng-disabled Directive</title>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.9/angular.min.js"></script>
</head>
<body ng-app="" style="text-align:center">
    <h1 style="color:green">GeeksforGeeks</h1>
    <h2>ng-disabled Directive</h2>
    <div ng-init="check=false">
        <!-- The button stays disabled while the checkbox is ticked -->
        <input type="checkbox" ng-model="check"> Check it
        <button ng-disabled="check">Button</button>
    </div>
</body>
</html>
\nOutput: Before clicking the button: \nAfter clicking the button: \nSupported Browser:\nGoogle Chrome\nMicrosoft Edge\nFirefox\nOpera\nSafari\nysachin2314\nAngularJS-Directives\nAngularJS\nWeb Technologies\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nAngular PrimeNG Dropdown Component\nAuth Guards in Angular 9/10/11\nAngular PrimeNG Calendar Component\nWhat is AOT and JIT Compiler in Angular ?\nHow to bundle an Angular app for production?\nRemove elements from a JavaScript Array\nInstallation of Node.js on Linux\nConvert a string to an integer in JavaScript\nHow to fetch data from an API in ReactJS ?\nHow to insert spaces/tabs in text using HTML/CSS?"},"parsed":{"kind":"list like","value":[{"code":null,"e":25518,"s":25490,"text":"\n31 Aug, 2021"},{"code":null,"e":25784,"s":25518,"text":"The ng-disabled Directive in AngularJS is used to enable or disable HTML elements. If the expression inside the ng-disabled attribute returns true then the form field will be disabled or vice versa. It is usually applied on form field (input, select, button, etc). "},{"code":null,"e":25794,"s":25784,"text":"Syntax: "},{"code":null,"e":25852,"s":25794,"text":" Contents... "},{"code":null,"e":25928,"s":25852,"text":"Example 1: This example uses ng-disabled Directive to disable the button. "},{"code":null,"e":25933,"s":25928,"text":"html"},{"code":" ng-disabled Directive

GeeksforGeeks

ng-disabled Directive

","e":26794,"s":25933,"text":null},{"code":null,"e":26832,"s":26794,"text":"Output: Before clicking the button: "},{"code":null,"e":26861,"s":26832,"text":"After clicking the button: "},{"code":null,"e":26961,"s":26863,"text":"Example 2: This example uses ng-disabled Directive to enable and disable button using checkbox. "},{"code":null,"e":26966,"s":26961,"text":"html"},{"code":" ng-disabled Directive

GeeksforGeeks

ng-disabled Directive

Check it

","e":27809,"s":26966,"text":null},{"code":null,"e":27847,"s":27809,"text":"Output: Before clicking the button: "},{"code":null,"e":27876,"s":27847,"text":"After clicking the button: "},{"code":null,"e":27895,"s":27876,"text":"Supported Browser:"},{"code":null,"e":27909,"s":27895,"text":"Google Chrome"},{"code":null,"e":27924,"s":27909,"text":"Microsoft Edge"},{"code":null,"e":27932,"s":27924,"text":"Firefox"},{"code":null,"e":27938,"s":27932,"text":"Opera"},{"code":null,"e":27945,"s":27938,"text":"Safari"},{"code":null,"e":27957,"s":27945,"text":"ysachin2314"},{"code":null,"e":27978,"s":27957,"text":"AngularJS-Directives"},{"code":null,"e":27988,"s":27978,"text":"AngularJS"},{"code":null,"e":28005,"s":27988,"text":"Web Technologies"},{"code":null,"e":28103,"s":28005,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":28138,"s":28103,"text":"Angular PrimeNG Dropdown Component"},{"code":null,"e":28169,"s":28138,"text":"Auth Guards in Angular 9/10/11"},{"code":null,"e":28204,"s":28169,"text":"Angular PrimeNG Calendar Component"},{"code":null,"e":28246,"s":28204,"text":"What is AOT and JIT Compiler in Angular ?"},{"code":null,"e":28291,"s":28246,"text":"How to bundle an Angular app for production?"},{"code":null,"e":28331,"s":28291,"text":"Remove elements from a JavaScript Array"},{"code":null,"e":28364,"s":28331,"text":"Installation of Node.js on Linux"},{"code":null,"e":28409,"s":28364,"text":"Convert a string to an integer in JavaScript"},{"code":null,"e":28452,"s":28409,"text":"How to fetch data from an API in ReactJS ?"}],"string":"[\n {\n \"code\": null,\n \"e\": 25518,\n \"s\": 25490,\n \"text\": \"\\n31 Aug, 2021\"\n },\n {\n \"code\": null,\n \"e\": 25784,\n \"s\": 25518,\n \"text\": \"The ng-disabled Directive in AngularJS is used to enable or disable HTML elements. If the expression inside the ng-disabled attribute returns true then the form field will be disabled or vice versa. 
It is usually applied on form field (input, select, button, etc). \"\n },\n {\n \"code\": null,\n \"e\": 25794,\n \"s\": 25784,\n \"text\": \"Syntax: \"\n },\n {\n \"code\": null,\n \"e\": 25852,\n \"s\": 25794,\n \"text\": \" Contents... \"\n },\n {\n \"code\": null,\n \"e\": 25928,\n \"s\": 25852,\n \"text\": \"Example 1: This example uses ng-disabled Directive to disable the button. \"\n },\n {\n \"code\": null,\n \"e\": 25933,\n \"s\": 25928,\n \"text\": \"html\"\n },\n {\n \"code\": \" ng-disabled Directive

GeeksforGeeks

ng-disabled Directive

\",\n \"e\": 26794,\n \"s\": 25933,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 26832,\n \"s\": 26794,\n \"text\": \"Output: Before clicking the button: \"\n },\n {\n \"code\": null,\n \"e\": 26861,\n \"s\": 26832,\n \"text\": \"After clicking the button: \"\n },\n {\n \"code\": null,\n \"e\": 26961,\n \"s\": 26863,\n \"text\": \"Example 2: This example uses ng-disabled Directive to enable and disable button using checkbox. \"\n },\n {\n \"code\": null,\n \"e\": 26966,\n \"s\": 26961,\n \"text\": \"html\"\n },\n {\n \"code\": \" ng-disabled Directive

GeeksforGeeks

ng-disabled Directive

Check it

\",\n \"e\": 27809,\n \"s\": 26966,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 27847,\n \"s\": 27809,\n \"text\": \"Output: Before clicking the button: \"\n },\n {\n \"code\": null,\n \"e\": 27876,\n \"s\": 27847,\n \"text\": \"After clicking the button: \"\n },\n {\n \"code\": null,\n \"e\": 27895,\n \"s\": 27876,\n \"text\": \"Supported Browser:\"\n },\n {\n \"code\": null,\n \"e\": 27909,\n \"s\": 27895,\n \"text\": \"Google Chrome\"\n },\n {\n \"code\": null,\n \"e\": 27924,\n \"s\": 27909,\n \"text\": \"Microsoft Edge\"\n },\n {\n \"code\": null,\n \"e\": 27932,\n \"s\": 27924,\n \"text\": \"Firefox\"\n },\n {\n \"code\": null,\n \"e\": 27938,\n \"s\": 27932,\n \"text\": \"Opera\"\n },\n {\n \"code\": null,\n \"e\": 27945,\n \"s\": 27938,\n \"text\": \"Safari\"\n },\n {\n \"code\": null,\n \"e\": 27957,\n \"s\": 27945,\n \"text\": \"ysachin2314\"\n },\n {\n \"code\": null,\n \"e\": 27978,\n \"s\": 27957,\n \"text\": \"AngularJS-Directives\"\n },\n {\n \"code\": null,\n \"e\": 27988,\n \"s\": 27978,\n \"text\": \"AngularJS\"\n },\n {\n \"code\": null,\n \"e\": 28005,\n \"s\": 27988,\n \"text\": \"Web Technologies\"\n },\n {\n \"code\": null,\n \"e\": 28103,\n \"s\": 28005,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 28138,\n \"s\": 28103,\n \"text\": \"Angular PrimeNG Dropdown Component\"\n },\n {\n \"code\": null,\n \"e\": 28169,\n \"s\": 28138,\n \"text\": \"Auth Guards in Angular 9/10/11\"\n },\n {\n \"code\": null,\n \"e\": 28204,\n \"s\": 28169,\n \"text\": \"Angular PrimeNG Calendar Component\"\n },\n {\n \"code\": null,\n \"e\": 28246,\n \"s\": 28204,\n \"text\": \"What is AOT and JIT Compiler in Angular ?\"\n },\n {\n \"code\": null,\n \"e\": 28291,\n \"s\": 28246,\n \"text\": \"How to bundle an Angular app for production?\"\n },\n {\n \"code\": null,\n \"e\": 28331,\n \"s\": 28291,\n \"text\": \"Remove elements from a JavaScript 
Array\"\n },\n {\n \"code\": null,\n \"e\": 28364,\n \"s\": 28331,\n \"text\": \"Installation of Node.js on Linux\"\n },\n {\n \"code\": null,\n \"e\": 28409,\n \"s\": 28364,\n \"text\": \"Convert a string to an integer in JavaScript\"\n },\n {\n \"code\": null,\n \"e\": 28452,\n \"s\": 28409,\n \"text\": \"How to fetch data from an API in ReactJS ?\"\n }\n]"}}},{"rowIdx":543,"cells":{"title":{"kind":"string","value":"Program to convert Java list of Strings to Seq in Scala - GeeksforGeeks"},"text":{"kind":"string","value":"14 Jan, 2020\nA java list of Strings can be converted to sequence in Scala by utilizing toSeq method of Java in Scala. Here, we need to import Scala’s JavaConversions object in order to make this conversions work else an error will occur.Now, lets see some examples and then discuss how it works in details.Example:1#\n// Scala program to convert Java list // to Sequence in Scala // Importing Scala's JavaConversions objectimport scala.collection.JavaConversions._ // Creating objectobject GfG{ // Main method def main(args:Array[String]) { // Creating list of Strings in Java val list = new java.util.ArrayList[String]() // Adding Strings to the list list.add(\"geeks\") list.add(\"cs\") list.add(\"portal\") // Converting list to Sequence val seq= list.toSeq // Displays seq println(seq) }}\nBuffer(geeks, cs, portal)\n\nExample:2#\n// Scala program to convert Java list // to Sequence in Scala // Importing Scala's JavaConversions objectimport scala.collection.JavaConversions._ // Creating objectobject GfG{ // Main method def main(args:Array[String]) { // Creating list of Strings in Java val list = new java.util.ArrayList[String]() // Adding Strings to the list list.add(\"i\") list.add(\"am an\") list.add(\"author\") // Converting list to Sequence val seq= list.toSeq // Displays seq println(seq) }}\nBuffer(i, am an, author)\n\nScala\nscala-collection\nScala-Method\nScala\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share 
the link here.\nInheritance in Scala\nScala | Traits\nScala ListBuffer\nScala | Case Class and Case Object\nHello World in Scala\nScala | Functions - Basics\nScala | Decision Making (if, if-else, Nested if-else, if-else if)\nScala List map() method with example\nComments In Scala\nScala | Try-Catch Exceptions"},"parsed":{"kind":"list like","value":[{"code":null,"e":25301,"s":25273,"text":"\n14 Jan, 2020"},{"code":null,"e":25605,"s":25301,"text":"A java list of Strings can be converted to sequence in Scala by utilizing toSeq method of Java in Scala. Here, we need to import Scala’s JavaConversions object in order to make this conversions work else an error will occur.Now, lets see some examples and then discuss how it works in details.Example:1#"},{"code":"// Scala program to convert Java list // to Sequence in Scala // Importing Scala's JavaConversions objectimport scala.collection.JavaConversions._ // Creating objectobject GfG{ // Main method def main(args:Array[String]) { // Creating list of Strings in Java val list = new java.util.ArrayList[String]() // Adding Strings to the list list.add(\"geeks\") list.add(\"cs\") list.add(\"portal\") // Converting list to Sequence val seq= list.toSeq // Displays seq println(seq) }}","e":26206,"s":25605,"text":null},{"code":null,"e":26233,"s":26206,"text":"Buffer(geeks, cs, portal)\n"},{"code":null,"e":26244,"s":26233,"text":"Example:2#"},{"code":"// Scala program to convert Java list // to Sequence in Scala // Importing Scala's JavaConversions objectimport scala.collection.JavaConversions._ // Creating objectobject GfG{ // Main method def main(args:Array[String]) { // Creating list of Strings in Java val list = new java.util.ArrayList[String]() // Adding Strings to the list list.add(\"i\") list.add(\"am an\") list.add(\"author\") // Converting list to Sequence val seq= list.toSeq // Displays seq println(seq) }}","e":26848,"s":26244,"text":null},{"code":null,"e":26874,"s":26848,"text":"Buffer(i, am an, 
Print all possible combinations of words from Dictionary using Trie - GeeksforGeeks
13 Feb, 2020
Given an array of strings arr[], for every string in the array, print all possible combinations of strings that can be concatenated to make that word.
Examples:

Input: arr[] = ["sam", "sung", "samsung"]
Output:
sam: 
    sam
sung: 
    sung
samsung: 
    sam sung
    samsung
String 'samsung' can be formed using two different
strings from the array i.e.
'sam' and 'sung' whereas
'samsung' itself is also a string in the array.

Input: arr[] = ["ice", "cream", "icecream"]
Output:
ice: 
    ice
cream: 
    cream
icecream: 
    ice cream
    icecream

Approach:
Add all the given strings into the trie.
Process every prefix character by character and check whether it forms a word by searching the trie.
If the prefix is present in the trie, add it to the result and proceed further with the remaining suffix of the string.
Once the end of the string is reached, print all the combinations found.
Below is the implementation of the above approach:
CPP
Java
C#

// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

const int ALPHABET_SIZE = 26;

// Trie node
struct TrieNode {
    struct TrieNode* children[ALPHABET_SIZE];

    // isEndOfWord is true if the node
    // represents the end of a word
    bool isEndOfWord;
};

// Returns a new trie node
struct TrieNode* getNode(void)
{
    struct TrieNode* pNode = new TrieNode;
    pNode->isEndOfWord = false;
    for (int i = 0; i < ALPHABET_SIZE; i++)
        pNode->children[i] = NULL;
    return pNode;
}

// If not present, inserts key into the trie
// If the key is a prefix of a trie node,
// marks the node as a leaf node
void insert(struct TrieNode* root, string key)
{
    struct TrieNode* pCrawl = root;
    for (int i = 0; i < key.length(); i++) {
        int index = key[i] - 'a';
        if (!pCrawl->children[index])
            pCrawl->children[index] = getNode();
        pCrawl = pCrawl->children[index];
    }

    // Mark node as leaf
    pCrawl->isEndOfWord = true;
}

// Returns true if the key is present in the trie
bool search(struct TrieNode* root, string key)
{
    struct TrieNode* pCrawl = root;
    for (int i = 0; i < key.length(); i++) {
        int index = key[i] - 'a';
        if (!pCrawl->children[index])
            return false;
        pCrawl = pCrawl->children[index];
    }
    return (pCrawl != NULL && pCrawl->isEndOfWord);
}

// Result stores the current prefix with
// spaces between words
void wordBreakAll(TrieNode* root, string word, int n, string result)
{
    // Process all prefixes one by one
    for (int i = 1; i <= n; i++) {

        // Extract the substring from 0 to i as the prefix
        string prefix = word.substr(0, i);

        // If the trie contains this prefix then check
        // for the remaining string.
        // Otherwise ignore this prefix
        if (search(root, prefix)) {

            // If no more elements are there then print
            if (i == n) {

                // Add this element to the previous prefix
                result += prefix;

                // If (result == word) then return
                // if you don't want to print the last word
                cout << "\t" << result << endl;
                return;
            }
            wordBreakAll(root, word.substr(i, n - i), n - i,
                         result + prefix + " ");
        }
    }
}

// Driver code
int main()
{
    struct TrieNode* root = getNode();
    string dictionary[] = { "sam", "sung", "samsung" };
    int n = sizeof(dictionary) / sizeof(string);

    for (int i = 0; i < n; i++) {
        insert(root, dictionary[i]);
    }
    for (int i = 0; i < n; i++) {
        cout << dictionary[i] << ": \n";
        wordBreakAll(root, dictionary[i],
                     dictionary[i].length(), "");
    }
    return 0;
}

// Java implementation of the approach
class GFG {

    static int ALPHABET_SIZE = 26;

    // Trie node
    static class TrieNode {
        TrieNode[] children = new TrieNode[ALPHABET_SIZE];

        // isEndOfWord is true if the node
        // represents the end of a word
        boolean isEndOfWord;

        public TrieNode()
        {
            super();
        }
    }

    // Returns a new trie node
    static TrieNode getNode()
    {
        TrieNode pNode = new TrieNode();
        pNode.isEndOfWord = false;
        for (int i = 0; i < ALPHABET_SIZE; i++)
            pNode.children[i] = null;
        return pNode;
    }

    // If not present, inserts key into the trie
    // If the key is a prefix of a trie node,
    // marks the node as a leaf node
    static void insert(TrieNode root, String key)
    {
        TrieNode pCrawl = root;
        for (int i = 0; i < key.length(); i++) {
            int index = key.charAt(i) - 'a';
            if (pCrawl.children[index] == null)
                pCrawl.children[index] = getNode();
            pCrawl = pCrawl.children[index];
        }

        // Mark node as leaf
        pCrawl.isEndOfWord = true;
    }

    // Returns true if the key is present in the trie
    static boolean search(TrieNode root, String key)
    {
        TrieNode pCrawl = root;
        for (int i = 0; i < key.length(); i++) {
            int index = key.charAt(i) - 'a';
            if (pCrawl.children[index] == null)
                return false;
            pCrawl = pCrawl.children[index];
        }
        return (pCrawl != null && pCrawl.isEndOfWord);
    }

    // Result stores the current prefix with
    // spaces between words
    static void wordBreakAll(TrieNode root, String word, int n, String result)
    {
        // Process all prefixes one by one
        for (int i = 1; i <= n; i++) {

            // Extract the substring from 0 to i as the prefix
            String prefix = word.substring(0, i);

            // If the trie contains this prefix then check
            // for the remaining string.
            // Otherwise ignore this prefix
            if (search(root, prefix)) {

                // If no more elements are there then print
                if (i == n) {

                    // Add this element to the previous prefix
                    result += prefix;

                    // If (result == word) then return
                    // if you don't want to print the last word
                    System.out.print("\t" + result + "\n");
                    return;
                }
                wordBreakAll(root, word.substring(i, n), n - i,
                             result + prefix + " ");
            }
        }
    }

    // Driver code
    public static void main(String[] args)
    {
        TrieNode root = getNode();
        String dictionary[] = { "sam", "sung", "samsung" };
        int n = dictionary.length;

        for (int i = 0; i < n; i++) {
            insert(root, dictionary[i]);
        }
        for (int i = 0; i < n; i++) {
            System.out.print(dictionary[i] + ": \n");
            wordBreakAll(root, dictionary[i],
                         dictionary[i].length(), "");
        }
    }
}
// This code is contributed by PrinciRaj1992

// C# implementation of the approach
using System;

class GFG {

    static int ALPHABET_SIZE = 26;

    // Trie node
    class TrieNode {
        public TrieNode[] children = new TrieNode[ALPHABET_SIZE];

        // isEndOfWord is true if the node
        // represents the end of a word
        public bool isEndOfWord;

        public TrieNode() {}
    }

    // Returns a new trie node
    static TrieNode getNode()
    {
        TrieNode pNode = new TrieNode();
        pNode.isEndOfWord = false;
        for (int i = 0; i < ALPHABET_SIZE; i++)
            pNode.children[i] = null;
        return pNode;
    }

    // If not present, inserts key into the trie
    // If the key is a prefix of a trie node,
    // marks the node as a leaf node
    static void insert(TrieNode root, String key)
    {
        TrieNode pCrawl = root;
        for (int i = 0; i < key.Length; i++) {
            int index = key[i] - 'a';
            if (pCrawl.children[index] == null)
                pCrawl.children[index] = getNode();
            pCrawl = pCrawl.children[index];
        }

        // Mark node as leaf
        pCrawl.isEndOfWord = true;
    }

    // Returns true if the key is present in the trie
    static bool search(TrieNode root, String key)
    {
        TrieNode pCrawl = root;
        for (int i = 0; i < key.Length; i++) {
            int index = key[i] - 'a';
            if (pCrawl.children[index] == null)
                return false;
            pCrawl = pCrawl.children[index];
        }
        return (pCrawl != null && pCrawl.isEndOfWord);
    }

    // Result stores the current prefix with
    // spaces between words
    static void wordBreakAll(TrieNode root, String word, int n, String result)
    {
        // Process all prefixes one by one
        for (int i = 1; i <= n; i++) {

            // Extract the substring from 0 to i as the prefix
            String prefix = word.Substring(0, i);

            // If the trie contains this prefix then check
            // for the remaining string.
            // Otherwise ignore this prefix
            if (search(root, prefix)) {

                // If no more elements are there then print
                if (i == n) {

                    // Add this element to the previous prefix
                    result += prefix;

                    // If (result == word) then return
                    // if you don't want to print the last word
                    Console.Write("\t" + result + "\n");
                    return;
                }
                wordBreakAll(root, word.Substring(i, n - i), n - i,
                             result + prefix + " ");
            }
        }
    }

    // Driver code
    public static void Main(String[] args)
    {
        TrieNode root = getNode();
        String[] dictionary = { "sam", "sung", "samsung" };
        int n = dictionary.Length;

        for (int i = 0; i < n; i++) {
            insert(root, dictionary[i]);
        }
        for (int i = 0; i < n; i++) {
            Console.Write(dictionary[i] + ": \n");
            wordBreakAll(root, dictionary[i],
                         dictionary[i].Length, "");
        }
    }
}
// This code is contributed by PrinciRaj1992

Output:
sam: 
    sam
sung: 
    sung
samsung: 
    sam sung
    samsung
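For comparison, the same prefix-splitting recursion also works with a plain hash set in place of the trie: each prefix lookup then costs one hash of the prefix instead of a character-by-character walk. Below is a minimal Java sketch of that variant (the class and method names are illustrative, not from the article); unlike the article's version, it collects the splits into a list instead of printing them.

```java
import java.util.*;

public class WordBreakSet {
    // Collect every way to split `word` into dictionary words,
    // appending each complete split to `out`.
    static void wordBreakAll(Set<String> dict, String word,
                             String prefix, List<String> out) {
        for (int i = 1; i <= word.length(); i++) {
            String p = word.substring(0, i);
            if (dict.contains(p)) {
                if (i == word.length()) {
                    // The whole remainder matched: record this split.
                    out.add(prefix + p);
                } else {
                    // Recurse on the remaining suffix.
                    wordBreakAll(dict, word.substring(i),
                                 prefix + p + " ", out);
                }
            }
        }
    }

    public static void main(String[] args) {
        Set<String> dict = new HashSet<>(
            Arrays.asList("sam", "sung", "samsung"));
        List<String> splits = new ArrayList<>();
        wordBreakAll(dict, "samsung", "", splits);
        System.out.println(splits); // prints [sam sung, samsung]
    }
}
```

The trie still wins asymptotically when the dictionary is large and words share long prefixes, since a failed prefix walk can stop early; the hash-set version re-hashes every prefix from scratch.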
ALPHABET_SIZE = 26; // Trie nodestruct TrieNode { struct TrieNode* children[ALPHABET_SIZE]; // isEndOfWord is true if node // represents the end of the word bool isEndOfWord;}; // Returns new trie nodestruct TrieNode*getNode(void){ struct TrieNode* pNode = new TrieNode; pNode->isEndOfWord = false; for (int i = 0; i < ALPHABET_SIZE; i++) pNode->children[i] = NULL; return pNode;} // If not present, inserts key into trie// If the key is prefix of trie node,// marks the node as leaf nodevoid insert(struct TrieNode* root, string key){ struct TrieNode* pCrawl = root; for (int i = 0; i < key.length(); i++) { int index = key[i] - 'a'; if (!pCrawl->children[index]) pCrawl->children[index] = getNode(); pCrawl = pCrawl->children[index]; } // Mark node as leaf pCrawl->isEndOfWord = true;} // Returns true if the key is present in the triebool search(struct TrieNode* root, string key){ struct TrieNode* pCrawl = root; for (int i = 0; i < key.length(); i++) { int index = key[i] - 'a'; if (!pCrawl->children[index]) return false; pCrawl = pCrawl->children[index]; } return (pCrawl != NULL && pCrawl->isEndOfWord);} // Result stores the current prefix with// spaces between wordsvoid wordBreakAll(TrieNode* root, string word, int n, string result){ // Process all prefixes one by one for (int i = 1; i <= n; i++) { // Extract substring from 0 to i in prefix string prefix = word.substr(0, i); // If trie conatins this prefix then check // for the remaining string. 
// Otherwise ignore this prefix if (search(root, prefix)) { // If no more elements are there then print if (i == n) { // Add this element to the previous prefix result += prefix; // If(result == word) then return // If you don't want to print last word cout << \"\\t\" << result << endl; return; } wordBreakAll(root, word.substr(i, n - i), n - i, result + prefix + \" \"); } }} // Driver codeint main(){ struct TrieNode* root = getNode(); string dictionary[] = { \"sam\", \"sung\", \"samsung\" }; int n = sizeof(dictionary) / sizeof(string); for (int i = 0; i < n; i++) { insert(root, dictionary[i]); } for (int i = 0; i < n; i++) { cout << dictionary[i] << \": \\n\"; wordBreakAll(root, dictionary[i], dictionary[i].length(), \"\"); } return 0;}","e":29792,"s":27023,"text":null},{"code":"// Java implementation of the approachclass GFG{ static int ALPHABET_SIZE = 26; // Trie nodestatic class TrieNode{ TrieNode []children = new TrieNode[ALPHABET_SIZE]; // isEndOfWord is true if node // represents the end of the word boolean isEndOfWord; public TrieNode() { super(); } }; // Returns new trie nodestatic TrieNode getNode(){ TrieNode pNode = new TrieNode(); pNode.isEndOfWord = false; for (int i = 0; i < ALPHABET_SIZE; i++) pNode.children[i] = null; return pNode;} // If not present, inserts key into trie// If the key is prefix of trie node,// marks the node as leaf nodestatic void insert(TrieNode root, String key){ TrieNode pCrawl = root; for (int i = 0; i < key.length(); i++) { int index = key.charAt(i) - 'a'; if (pCrawl.children[index] == null) pCrawl.children[index] = getNode(); pCrawl = pCrawl.children[index]; } // Mark node as leaf pCrawl.isEndOfWord = true;} // Returns true if the key is present in the triestatic boolean search(TrieNode root, String key){ TrieNode pCrawl = root; for (int i = 0; i < key.length(); i++) { int index = key.charAt(i) - 'a'; if (pCrawl.children[index] == null) return false; pCrawl = pCrawl.children[index]; } return (pCrawl != null && 
pCrawl.isEndOfWord);} // Result stores the current prefix with// spaces between wordsstatic void wordBreakAll(TrieNode root, String word, int n, String result){ // Process all prefixes one by one for (int i = 1; i <= n; i++) { // Extract subString from 0 to i in prefix String prefix = word.substring(0, i); // If trie conatins this prefix then check // for the remaining String. // Otherwise ignore this prefix if (search(root, prefix)) { // If no more elements are there then print if (i == n) { // Add this element to the previous prefix result += prefix; // If(result == word) then return // If you don't want to print last word System.out.print(\"\\t\" + result +\"\\n\"); return; } wordBreakAll(root, word.substring(i, n), n - i, result + prefix + \" \"); } }} // Driver codepublic static void main(String[] args){ new TrieNode(); TrieNode root = getNode(); String dictionary[] = {\"sam\", \"sung\", \"samsung\"}; int n = dictionary.length; for (int i = 0; i < n; i++) { insert(root, dictionary[i]); } for (int i = 0; i < n; i++) { System.out.print(dictionary[i]+ \": \\n\"); wordBreakAll(root, dictionary[i], dictionary[i].length(), \"\"); }}} // This code is contributed by PrinciRaj1992","e":32712,"s":29792,"text":null},{"code":"// C# implementation of the approachusing System; class GFG{ static int ALPHABET_SIZE = 26; // Trie nodeclass TrieNode{ public TrieNode []children = new TrieNode[ALPHABET_SIZE]; // isEndOfWord is true if node // represents the end of the word public bool isEndOfWord; public TrieNode() { } }; // Returns new trie nodestatic TrieNode getNode(){ TrieNode pNode = new TrieNode(); pNode.isEndOfWord = false; for (int i = 0; i < ALPHABET_SIZE; i++) pNode.children[i] = null; return pNode;} // If not present, inserts key into trie// If the key is prefix of trie node,// marks the node as leaf nodestatic void insert(TrieNode root, String key){ TrieNode pCrawl = root; for (int i = 0; i < key.Length; i++) { int index = key[i] - 'a'; if (pCrawl.children[index] == 
null) pCrawl.children[index] = getNode(); pCrawl = pCrawl.children[index]; } // Mark node as leaf pCrawl.isEndOfWord = true;} // Returns true if the key is present in the triestatic bool search(TrieNode root, String key){ TrieNode pCrawl = root; for (int i = 0; i < key.Length; i++) { int index = key[i] - 'a'; if (pCrawl.children[index] == null) return false; pCrawl = pCrawl.children[index]; } return (pCrawl != null && pCrawl.isEndOfWord);} // Result stores the current prefix with// spaces between wordsstatic void wordBreakAll(TrieNode root, String word, int n, String result){ // Process all prefixes one by one for (int i = 1; i <= n; i++) { // Extract subString from 0 to i in prefix String prefix = word.Substring(0, i); // If trie conatins this prefix then check // for the remaining String. // Otherwise ignore this prefix if (search(root, prefix)) { // If no more elements are there then print if (i == n) { // Add this element to the previous prefix result += prefix; // If(result == word) then return // If you don't want to print last word Console.Write(\"\\t\" + result +\"\\n\"); return; } wordBreakAll(root, word.Substring(i, n - i), n - i, result + prefix + \" \"); } }} // Driver codepublic static void Main(String[] args){ new TrieNode(); TrieNode root = getNode(); String []dictionary = {\"sam\", \"sung\", \"samsung\"}; int n = dictionary.Length; for (int i = 0; i < n; i++) { insert(root, dictionary[i]); } for (int i = 0; i < n; i++) { Console.Write(dictionary[i]+ \": \\n\"); wordBreakAll(root, dictionary[i], dictionary[i].Length, \"\"); }}} // This code is contributed by PrinciRaj1992","e":35608,"s":32712,"text":null},{"code":null,"e":35674,"s":35608,"text":"sam: \n sam\nsung: \n sung\nsamsung: \n sam sung\n samsung\n"},{"code":null,"e":35688,"s":35674,"text":"princiraj1992"},{"code":null,"e":35712,"s":35688,"text":"Technical Scripter 
2019"},{"code":null,"e":35717,"s":35712,"text":"Trie"},{"code":null,"e":35724,"s":35717,"text":"Arrays"},{"code":null,"e":35737,"s":35724,"text":"Backtracking"},{"code":null,"e":35745,"s":35737,"text":"Strings"},{"code":null,"e":35764,"s":35745,"text":"Technical Scripter"},{"code":null,"e":35771,"s":35764,"text":"Arrays"},{"code":null,"e":35779,"s":35771,"text":"Strings"},{"code":null,"e":35792,"s":35779,"text":"Backtracking"},{"code":null,"e":35797,"s":35792,"text":"Trie"},{"code":null,"e":35895,"s":35797,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":35922,"s":35895,"text":"Count pairs with given sum"},{"code":null,"e":35953,"s":35922,"text":"Chocolate Distribution Problem"},{"code":null,"e":35978,"s":35953,"text":"Window Sliding Technique"},{"code":null,"e":36016,"s":35978,"text":"Reversal algorithm for array rotation"},{"code":null,"e":36037,"s":36016,"text":"Next Greater Element"},{"code":null,"e":36070,"s":36037,"text":"N Queen Problem | Backtracking-3"},{"code":null,"e":36130,"s":36070,"text":"Write a program to print all permutations of a given string"},{"code":null,"e":36158,"s":36130,"text":"Backtracking | Introduction"},{"code":null,"e":36189,"s":36158,"text":"Rat in a Maze | Backtracking-2"}],"string":"[\n {\n \"code\": null,\n \"e\": 26065,\n \"s\": 26037,\n \"text\": \"\\n13 Feb, 2020\"\n },\n {\n \"code\": null,\n \"e\": 26216,\n \"s\": 26065,\n \"text\": \"Given an array of strings arr[], for every string in the array, print all possible combinations of strings that can be concatenated to make that word.\"\n },\n {\n \"code\": null,\n \"e\": 26226,\n \"s\": 26216,\n \"text\": \"Examples:\"\n },\n {\n \"code\": null,\n \"e\": 26619,\n \"s\": 26226,\n \"text\": \"Input: arr[] = [\\\"sam\\\", \\\"sung\\\", \\\"samsung\\\"]\\nOutput:\\nsam: \\n sam\\nsung: \\n sung\\nsamsung: \\n sam sung\\n samsung\\nString 'samsung' can be formed using two different \\nstrings from the 
array i.e. 'sam' and 'sung' whereas \\n'samsung' itself is also a string in the array.\\n\\nInput: arr[] = [\\\"ice\\\", \\\"cream\\\", \\\"icecream\\\"]\\nOutput:\\nice: \\n ice\\ncream: \\n cream\\nicecream: \\n ice cream\\n icecream\\n\"\n },\n {\n \"code\": null,\n \"e\": 26629,\n \"s\": 26619,\n \"text\": \"Approach:\"\n },\n {\n \"code\": null,\n \"e\": 26666,\n \"s\": 26629,\n \"text\": \"Add all the given strings into trie.\"\n },\n {\n \"code\": null,\n \"e\": 26763,\n \"s\": 26666,\n \"text\": \"Process every prefix character by character and check if it forms a word from trie by searching.\"\n },\n {\n \"code\": null,\n \"e\": 26887,\n \"s\": 26763,\n \"text\": \"If the prefix is present in the trie then add it to the result and proceed further with the remaining suffix in the string.\"\n },\n {\n \"code\": null,\n \"e\": 26960,\n \"s\": 26887,\n \"text\": \"Once it reaches the end of the string, print all the combinations found.\"\n },\n {\n \"code\": null,\n \"e\": 27011,\n \"s\": 26960,\n \"text\": \"Below is the implementation of the above approach:\"\n },\n {\n \"code\": null,\n \"e\": 27015,\n \"s\": 27011,\n \"text\": \"CPP\"\n },\n {\n \"code\": null,\n \"e\": 27020,\n \"s\": 27015,\n \"text\": \"Java\"\n },\n {\n \"code\": null,\n \"e\": 27023,\n \"s\": 27020,\n \"text\": \"C#\"\n },\n {\n \"code\": \"// C++ implementation of the approach #include using namespace std; const int ALPHABET_SIZE = 26; // Trie nodestruct TrieNode { struct TrieNode* children[ALPHABET_SIZE]; // isEndOfWord is true if node // represents the end of the word bool isEndOfWord;}; // Returns new trie nodestruct TrieNode*getNode(void){ struct TrieNode* pNode = new TrieNode; pNode->isEndOfWord = false; for (int i = 0; i < ALPHABET_SIZE; i++) pNode->children[i] = NULL; return pNode;} // If not present, inserts key into trie// If the key is prefix of trie node,// marks the node as leaf nodevoid insert(struct TrieNode* root, string key){ struct TrieNode* pCrawl = root; for (int i 
= 0; i < key.length(); i++) { int index = key[i] - 'a'; if (!pCrawl->children[index]) pCrawl->children[index] = getNode(); pCrawl = pCrawl->children[index]; } // Mark node as leaf pCrawl->isEndOfWord = true;} // Returns true if the key is present in the triebool search(struct TrieNode* root, string key){ struct TrieNode* pCrawl = root; for (int i = 0; i < key.length(); i++) { int index = key[i] - 'a'; if (!pCrawl->children[index]) return false; pCrawl = pCrawl->children[index]; } return (pCrawl != NULL && pCrawl->isEndOfWord);} // Result stores the current prefix with// spaces between wordsvoid wordBreakAll(TrieNode* root, string word, int n, string result){ // Process all prefixes one by one for (int i = 1; i <= n; i++) { // Extract substring from 0 to i in prefix string prefix = word.substr(0, i); // If trie conatins this prefix then check // for the remaining string. // Otherwise ignore this prefix if (search(root, prefix)) { // If no more elements are there then print if (i == n) { // Add this element to the previous prefix result += prefix; // If(result == word) then return // If you don't want to print last word cout << \\\"\\\\t\\\" << result << endl; return; } wordBreakAll(root, word.substr(i, n - i), n - i, result + prefix + \\\" \\\"); } }} // Driver codeint main(){ struct TrieNode* root = getNode(); string dictionary[] = { \\\"sam\\\", \\\"sung\\\", \\\"samsung\\\" }; int n = sizeof(dictionary) / sizeof(string); for (int i = 0; i < n; i++) { insert(root, dictionary[i]); } for (int i = 0; i < n; i++) { cout << dictionary[i] << \\\": \\\\n\\\"; wordBreakAll(root, dictionary[i], dictionary[i].length(), \\\"\\\"); } return 0;}\",\n \"e\": 29792,\n \"s\": 27023,\n \"text\": null\n },\n {\n \"code\": \"// Java implementation of the approachclass GFG{ static int ALPHABET_SIZE = 26; // Trie nodestatic class TrieNode{ TrieNode []children = new TrieNode[ALPHABET_SIZE]; // isEndOfWord is true if node // represents the end of the word boolean isEndOfWord; public 
TrieNode() { super(); } }; // Returns new trie nodestatic TrieNode getNode(){ TrieNode pNode = new TrieNode(); pNode.isEndOfWord = false; for (int i = 0; i < ALPHABET_SIZE; i++) pNode.children[i] = null; return pNode;} // If not present, inserts key into trie// If the key is prefix of trie node,// marks the node as leaf nodestatic void insert(TrieNode root, String key){ TrieNode pCrawl = root; for (int i = 0; i < key.length(); i++) { int index = key.charAt(i) - 'a'; if (pCrawl.children[index] == null) pCrawl.children[index] = getNode(); pCrawl = pCrawl.children[index]; } // Mark node as leaf pCrawl.isEndOfWord = true;} // Returns true if the key is present in the triestatic boolean search(TrieNode root, String key){ TrieNode pCrawl = root; for (int i = 0; i < key.length(); i++) { int index = key.charAt(i) - 'a'; if (pCrawl.children[index] == null) return false; pCrawl = pCrawl.children[index]; } return (pCrawl != null && pCrawl.isEndOfWord);} // Result stores the current prefix with// spaces between wordsstatic void wordBreakAll(TrieNode root, String word, int n, String result){ // Process all prefixes one by one for (int i = 1; i <= n; i++) { // Extract subString from 0 to i in prefix String prefix = word.substring(0, i); // If trie conatins this prefix then check // for the remaining String. 
// Otherwise ignore this prefix if (search(root, prefix)) { // If no more elements are there then print if (i == n) { // Add this element to the previous prefix result += prefix; // If(result == word) then return // If you don't want to print last word System.out.print(\\\"\\\\t\\\" + result +\\\"\\\\n\\\"); return; } wordBreakAll(root, word.substring(i, n), n - i, result + prefix + \\\" \\\"); } }} // Driver codepublic static void main(String[] args){ new TrieNode(); TrieNode root = getNode(); String dictionary[] = {\\\"sam\\\", \\\"sung\\\", \\\"samsung\\\"}; int n = dictionary.length; for (int i = 0; i < n; i++) { insert(root, dictionary[i]); } for (int i = 0; i < n; i++) { System.out.print(dictionary[i]+ \\\": \\\\n\\\"); wordBreakAll(root, dictionary[i], dictionary[i].length(), \\\"\\\"); }}} // This code is contributed by PrinciRaj1992\",\n \"e\": 32712,\n \"s\": 29792,\n \"text\": null\n },\n {\n \"code\": \"// C# implementation of the approachusing System; class GFG{ static int ALPHABET_SIZE = 26; // Trie nodeclass TrieNode{ public TrieNode []children = new TrieNode[ALPHABET_SIZE]; // isEndOfWord is true if node // represents the end of the word public bool isEndOfWord; public TrieNode() { } }; // Returns new trie nodestatic TrieNode getNode(){ TrieNode pNode = new TrieNode(); pNode.isEndOfWord = false; for (int i = 0; i < ALPHABET_SIZE; i++) pNode.children[i] = null; return pNode;} // If not present, inserts key into trie// If the key is prefix of trie node,// marks the node as leaf nodestatic void insert(TrieNode root, String key){ TrieNode pCrawl = root; for (int i = 0; i < key.Length; i++) { int index = key[i] - 'a'; if (pCrawl.children[index] == null) pCrawl.children[index] = getNode(); pCrawl = pCrawl.children[index]; } // Mark node as leaf pCrawl.isEndOfWord = true;} // Returns true if the key is present in the triestatic bool search(TrieNode root, String key){ TrieNode pCrawl = root; for (int i = 0; i < key.Length; i++) { int index = key[i] - 'a'; 
if (pCrawl.children[index] == null) return false; pCrawl = pCrawl.children[index]; } return (pCrawl != null && pCrawl.isEndOfWord);} // Result stores the current prefix with// spaces between wordsstatic void wordBreakAll(TrieNode root, String word, int n, String result){ // Process all prefixes one by one for (int i = 1; i <= n; i++) { // Extract subString from 0 to i in prefix String prefix = word.Substring(0, i); // If trie conatins this prefix then check // for the remaining String. // Otherwise ignore this prefix if (search(root, prefix)) { // If no more elements are there then print if (i == n) { // Add this element to the previous prefix result += prefix; // If(result == word) then return // If you don't want to print last word Console.Write(\\\"\\\\t\\\" + result +\\\"\\\\n\\\"); return; } wordBreakAll(root, word.Substring(i, n - i), n - i, result + prefix + \\\" \\\"); } }} // Driver codepublic static void Main(String[] args){ new TrieNode(); TrieNode root = getNode(); String []dictionary = {\\\"sam\\\", \\\"sung\\\", \\\"samsung\\\"}; int n = dictionary.Length; for (int i = 0; i < n; i++) { insert(root, dictionary[i]); } for (int i = 0; i < n; i++) { Console.Write(dictionary[i]+ \\\": \\\\n\\\"); wordBreakAll(root, dictionary[i], dictionary[i].Length, \\\"\\\"); }}} // This code is contributed by PrinciRaj1992\",\n \"e\": 35608,\n \"s\": 32712,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 35674,\n \"s\": 35608,\n \"text\": \"sam: \\n sam\\nsung: \\n sung\\nsamsung: \\n sam sung\\n samsung\\n\"\n },\n {\n \"code\": null,\n \"e\": 35688,\n \"s\": 35674,\n \"text\": \"princiraj1992\"\n },\n {\n \"code\": null,\n \"e\": 35712,\n \"s\": 35688,\n \"text\": \"Technical Scripter 2019\"\n },\n {\n \"code\": null,\n \"e\": 35717,\n \"s\": 35712,\n \"text\": \"Trie\"\n },\n {\n \"code\": null,\n \"e\": 35724,\n \"s\": 35717,\n \"text\": \"Arrays\"\n },\n {\n \"code\": null,\n \"e\": 35737,\n \"s\": 35724,\n \"text\": \"Backtracking\"\n },\n {\n \"code\": 
How to sort an array in a single loop? - GeeksforGeeks

11 Jun, 2021

Given an array of size N, the task is to sort this array using a single loop.

How is the array usually sorted?
There are many ways by which an array can be sorted in ascending order, like:

Selection Sort
Binary Sort
Merge Sort
Radix Sort
Insertion Sort, etc.

In any of these methods, more than one loop is used.

Can the array be sorted using a single loop? Since all the well-known sorting methods use more than one loop, it is hard to imagine doing the same with a single loop. Practically, it is possible to do so, but it is not the most efficient approach.

Example 1: The code below sorts an array of integers.

// C++ code to sort an array of integers
// with the help of a single loop
#include <bits/stdc++.h>
using namespace std;

// Function for sorting the array
// using a single loop
int *sortArrays(int arr[], int length)
{
    // Sorting using a single loop
    for (int j = 0; j < length - 1; j++) {

        // Checking the condition for two
        // simultaneous elements of the array
        if (arr[j] > arr[j + 1]) {

            // Swapping the elements
            int temp = arr[j];
            arr[j] = arr[j + 1];
            arr[j + 1] = temp;

            // Updating the value of j to -1 so that
            // after j++ in the loop it becomes 0 and
            // the loop begins from the start again
            j = -1;
        }
    }
    return arr;
}

// Driver code
int main()
{
    // Declaring an integer array of size 11
    int arr[] = { 1, 2, 99, 9, 8, 7, 6, 0, 5, 4, 3 };
    int length = sizeof(arr) / sizeof(arr[0]);

    // Printing the original array
    string str;
    for (int i : arr) {
        str += to_string(i) + " ";
    }
    cout << "Original array: [" << str << "]" << endl;

    // Sorting the array using a single loop
    int *arr1 = sortArrays(arr, length);

    // Printing the sorted array
    string str1;
    for (int i = 0; i < length; i++) {
        str1 += to_string(arr1[i]) + " ";
    }
    cout << "Sorted array: [" << str1 << "]";
}

// This code is contributed by Rajput-Ji

// Java code to sort an array of integers
// with the help of a single loop
import java.util.*;

class Geeks_For_Geeks {

    // Function for sorting the array
    // using a single loop
    public static int[] sortArrays(int[] arr)
    {
        // Finding the length of array 'arr'
        int length = arr.length;

        // Sorting using a single loop
        for (int j = 0; j < length - 1; j++) {

            // Checking the condition for two
            // simultaneous elements of the array
            if (arr[j] > arr[j + 1]) {

                // Swapping the elements
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;

                // Updating the value of j to -1 so that
                // after j++ in the loop it becomes 0 and
                // the loop begins from the start again
                j = -1;
            }
        }
        return arr;
    }

    // Driver code
    public static void main(String args[])
    {
        // Declaring an integer array of size 11
        int arr[] = { 1, 2, 99, 9, 8, 7, 6, 0, 5, 4, 3 };

        // Printing the original array
        System.out.println("Original array: " + Arrays.toString(arr));

        // Sorting the array using a single loop
        arr = sortArrays(arr);

        // Printing the sorted array
        System.out.println("Sorted array: " + Arrays.toString(arr));
    }
}

# Python3 code to sort an array of integers
# with the help of a single loop

# Function for sorting the array
# using a single loop
def sortArrays(arr):

    # Finding the length of array 'arr'
    length = len(arr)

    # Sorting using a single loop
    j = 0
    while j < length - 1:

        # Checking the condition for two
        # simultaneous elements of the array
        if (arr[j] > arr[j + 1]):

            # Swapping the elements
            temp = arr[j]
            arr[j] = arr[j + 1]
            arr[j + 1] = temp

            # Updating the value of j to -1 so that
            # after the increment it becomes 0 and
            # the loop begins from the start again
            j = -1
        j += 1

    return arr

# Driver code
if __name__ == '__main__':

    # Declaring an integer array of size 11
    arr = [1, 2, 99, 9, 8, 7, 6, 0, 5, 4, 3]

    # Printing the original array
    print("Original array: ", arr)

    # Sorting the array using a single loop
    arr = sortArrays(arr)

    # Printing the sorted array
    print("Sorted array: ", arr)

# This code is contributed by Mohit Kumar

// C# code to sort an array of integers
// with the help of a single loop
using System;

class GFG {

    // Function for sorting the array
    // using a single loop
    public static int[] sortArrays(int[] arr)
    {
        // Finding the length of array 'arr'
        int length = arr.Length;

        // Sorting using a single loop
        for (int j = 0; j < length - 1; j++) {

            // Checking the condition for two
            // simultaneous elements of the array
            if (arr[j] > arr[j + 1]) {

                // Swapping the elements
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;

                // Updating the value of j to -1 so that
                // after j++ in the loop it becomes 0 and
                // the loop begins from the start again
                j = -1;
            }
        }
        return arr;
    }

    // Driver code
    public static void Main(String[] args)
    {
        // Declaring an integer array of size 11
        int[] arr = { 1, 2, 99, 9, 8, 7, 6, 0, 5, 4, 3 };

        // Printing the original array
        Console.WriteLine("Original array: " + String.Join(", ", arr));

        // Sorting the array using a single loop
        arr = sortArrays(arr);

        // Printing the sorted array
        Console.WriteLine("Sorted array: " + String.Join(", ", arr));
    }
}

// This code is contributed by Rajput-Ji

Output:

Original array: [1, 2, 99, 9, 8, 7, 6, 0, 5, 4, 3]
Sorted array: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 99]

Example 2: The code below sorts the characters of a string.

// C++ code to sort the characters of a string
// with the help of a single loop
#include <bits/stdc++.h>
using namespace std;

// Function for sorting the array using a single loop
char *sortArrays(char arr[], int length)
{
    // Sorting using a single loop
    for (int j = 0; j < length - 1; j++) {

        // Type conversion of char to int
        int d1 = arr[j];
        int d2 = arr[j + 1];

        // Comparing the ASCII codes
        if (d1 > d2) {

            // Swapping the characters
            char temp = arr[j];
            arr[j] = arr[j + 1];
            arr[j + 1] = temp;
            j = -1;
        }
    }
    return arr;
}

// Driver code
int main()
{
    // Declaring a string
    string geeks = "GEEKSFORGEEKS";
    int n = geeks.length();

    // Declaring a character array and copying the
    // contents of the string into it
    char arr[n];
    for (int i = 0; i < n; i++) {
        arr[i] = geeks[i];
    }

    // Printing the original array
    cout << "Original array: [";
    for (int i = 0; i < n; i++) {
        cout << arr[i];
        if (i + 1 != n)
            cout << ", ";
    }
    cout << "]" << endl;

    // Sorting the array using a single loop
    char *ansarr = sortArrays(arr, n);

    // Printing the sorted array
    cout << "Sorted array: [";
    for (int i = 0; i < n; i++) {
        cout << ansarr[i];
        if (i + 1 != n)
            cout << ", ";
    }
    cout << "]" << endl;
}

// This code is contributed by Rajput-Ji

// Java code to sort the characters of a string
// with the help of a single loop
import java.util.*;

class Geeks_For_Geeks {

    // Function for sorting the array using a single loop
    public static char[] sortArrays(char[] arr)
    {
        // Sorting using a single loop
        for (int j = 0; j < arr.length - 1; j++) {

            // Type conversion of char to int
            int d1 = arr[j];
            int d2 = arr[j + 1];

            // Comparing the ASCII codes
            if (d1 > d2) {

                // Swapping the characters
                char temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                j = -1;
            }
        }
        return arr;
    }

    // Driver code
    public static void main(String args[])
    {
        // Declaring a String
        String geeks = "GEEKSFORGEEKS";

        // Declaring a character array to store
        // the characters of geeks in it
        char arr[] = geeks.toCharArray();

        // Printing the original array
        System.out.println("Original array: " + Arrays.toString(arr));

        // Sorting the array using a single loop
        arr = sortArrays(arr);

        // Printing the sorted array
        System.out.println("Sorted array: " + Arrays.toString(arr));
    }
}

# Python3 code to sort the characters of a string
# with the help of a single loop

# Function for sorting the array using a single loop
def sortArrays(arr, length):

    # Sorting using a single loop
    j = 0
    while (j < length - 1):

        # Type conversion of char to int
        d1 = arr[j]
        d2 = arr[j + 1]

        # Comparing the ASCII codes
        if (d1 > d2):

            # Swapping the characters
            temp = arr[j]
            arr[j] = arr[j + 1]
            arr[j + 1] = temp
            j = -1
        j += 1

    return arr

# Driver code

# Declaring a string
geeks = "GEEKSFORGEEKS"
n = len(geeks)

# Declaring a character array and copying the
# contents of the string into it
arr = [0] * n
for i in range(n):
    arr[i] = geeks[i]

# Printing the original array
print("Original array: [", end="")
for i in range(n):
    print(arr[i], end="")
    if (i + 1 != n):
        print(", ", end="")
print("]")

# Sorting the array using a single loop
ansarr = sortArrays(arr, n)

# Printing the sorted array
print("Sorted array: [", end="")
for i in range(n):
    print(ansarr[i], end="")
    if (i + 1 != n):
        print(", ", end="")
print("]")

# This code is contributed by shubhamsingh10

// C# code to sort the characters of a string
// with the help of a single loop
using System;

class GFG {

    // Function for sorting the array using a single loop
    public static char[] sortArrays(char[] arr)
    {
        // Sorting using a single loop
        for (int j = 0; j < arr.Length - 1; j++) {

            // Type conversion of char to int
            int d1 = arr[j];
            int d2 = arr[j + 1];

            // Comparing the ASCII codes
            if (d1 > d2) {

                // Swapping the characters
                char temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                j = -1;
            }
        }
        return arr;
    }

    // Driver code
    public static void Main(String[] args)
    {
        // Declaring a String
        String geeks = "GEEKSFORGEEKS";

        // Declaring a character array to store
        // the characters of geeks in it
        char[] arr = geeks.ToCharArray();

        // Printing the original array
        Console.WriteLine("Original array: [" + String.Join(", ", arr) + "]");

        // Sorting the array using a single loop
        arr = sortArrays(arr);

        // Printing the sorted array
        Console.WriteLine("Sorted array: [" + String.Join(", ", arr) + "]");
    }
}

// This code is contributed by PrinciRaj1992

Output:

Original array: [G, E, E, K, S, F, O, R, G, E, E, K, S]
Sorted array: [E, E, E, E, F, G, G, K, K, O, R, S, S]

Is sorting an array in a single loop better than sorting in more than one loop? Sorting in a single loop, though it seems better, is not an efficient approach. Below are some points to be taken into consideration before using single-loop sorting:

Using a single loop only helps in writing shorter code.
The time complexity of the sort does not improve with a single loop (in comparison to sorting with more than one loop).
Single-loop sorting shows that the number of loops has little to do with the time complexity of an algorithm.
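For completeness, the same single-loop approach can be written in Javascript; this port is a sketch of my own, not code from the article, but it follows the same j = -1 restart trick as the examples above:

```javascript
// Javascript code to sort an array of integers
// with the help of a single loop
function sortArrays(arr) {
    // Sorting using a single loop
    for (let j = 0; j < arr.length - 1; j++) {
        // Checking the condition for two
        // simultaneous elements of the array
        if (arr[j] > arr[j + 1]) {
            // Swapping the elements
            const temp = arr[j];
            arr[j] = arr[j + 1];
            arr[j + 1] = temp;

            // Resetting j to -1 so that after j++
            // the loop restarts from index 0
            j = -1;
        }
    }
    return arr;
}

// Driver code
const arr = [1, 2, 99, 9, 8, 7, 6, 0, 5, 4, 3];
console.log("Original array: [" + arr.join(", ") + "]");
console.log("Sorted array: [" + sortArrays(arr.slice()).join(", ") + "]");
```

The same function also sorts an array of characters, since `>` compares strings lexicographically in Javascript.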
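The closing points can be made concrete with a small instrumented sketch. Because every swap resets j to the start, the loop body runs at least once per swap, and on a reverse-sorted array of size n an adjacent-swap sort performs n(n-1)/2 swaps (one per inversion), so the iteration count is at least quadratic in n. The `countIterations` helper below is hypothetical, written only to illustrate this:

```javascript
// Count how many times the single-loop sort's body runs.
// Helper name is illustrative, not from the article.
function countIterations(arr) {
    let iterations = 0;
    for (let j = 0; j < arr.length - 1; j++) {
        iterations++;
        if (arr[j] > arr[j + 1]) {
            const temp = arr[j];
            arr[j] = arr[j + 1];
            arr[j + 1] = temp;
            j = -1; // restart from the beginning
        }
    }
    return iterations;
}

// Worst case: a reverse-sorted array of size n has
// n*(n-1)/2 inversions, and each swap costs at least
// one iteration, so the count grows quadratically or worse.
for (const n of [8, 16, 32]) {
    const reversed = Array.from({ length: n }, (_, i) => n - i);
    console.log("n = " + n + ": " + countIterations(reversed) + " iterations");
}
```

The printed counts grow far faster than n, which is the point of the conclusion: a single loop shortens the code but not the running time.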
if (d1 > d2): # Swapping of the characters temp = arr[j] arr[j] = arr[j + 1] arr[j + 1] = temp j = -1 j += 1 return arr # Driver code # Declaring a Stringgeeks = \\\"GEEKSFORGEEKS\\\"n = len(geeks) # declaring character arrayarr=[0]*n # copying the contents of the# string to char arrayfor i in range(n): arr[i] = geeks[i] # Printing the original Array.print(\\\"Original array: [\\\",end=\\\"\\\") for i in range(n): print(arr[i],end=\\\"\\\") if (i + 1 != n): print(\\\", \\\",end=\\\"\\\") print(\\\"]\\\") # Sorting the array using a single loopansarr = sortArrays(arr, n) # Printing the sorted array.print(\\\"Sorted array: [\\\",end=\\\"\\\") for i in range(n): print(ansarr[i],end=\\\"\\\") if (i + 1 != n): print(\\\", \\\",end=\\\"\\\") print(\\\"]\\\") # This code is contributed by shubhamsingh10\",\n \"e\": 37242,\n \"s\": 36019,\n \"text\": null\n },\n {\n \"code\": \"// C# code to sort an array of Strings// with the help of single loopusing System; class GFG{ // Function for Sorting the array // using a single loop public static char[] sortArrays(char[] arr) { // Finding the length of array 'arr' int length = arr.Length; // Sorting using a single loop for (int j = 0; j < arr.Length - 1; j++) { // Type Conversion of char to int. int d1 = arr[j]; int d2 = arr[j + 1]; // Comparing the ascii code. if (d1 > d2) { // Swapping of the characters char temp = arr[j]; arr[j] = arr[j + 1]; arr[j + 1] = temp; j = -1; } } return arr; } // Declaring main method public static void Main(String []args) { // Declaring a String String geeks = \\\"GEEKSFORGEEKS\\\"; // Declaring a character array // to store characters of geeks in it. char []arr = geeks.ToCharArray(); // Printing the original Array. Console.WriteLine(\\\"Original array: [\\\" + String.Join(\\\", \\\", arr) + \\\"]\\\"); // Sorting the array using a single loop arr = sortArrays(arr); // Printing the sorted array. 
Console.WriteLine(\\\"Sorted array: [\\\" + String.Join(\\\", \\\", arr) + \\\"]\\\"); }} // This code is contributed by PrinciRaj1992\",\n \"e\": 38715,\n \"s\": 37242,\n \"text\": null\n },\n {\n \"code\": \"\",\n \"e\": 39984,\n \"s\": 38715,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 40094,\n \"s\": 39984,\n \"text\": \"Original array: [G, E, E, K, S, F, O, R, G, E, E, K, S]\\nSorted array: [E, E, E, E, F, G, G, K, K, O, R, S, S]\"\n },\n {\n \"code\": null,\n \"e\": 40346,\n \"s\": 40096,\n \"text\": \"Is sorting array in single loop better than sorting in more than one loop? Sorting in a single loop, though it seems to be better, is not an efficient approach. Below are some points to be taken into consideration before using single loop sorting: \"\n },\n {\n \"code\": null,\n \"e\": 40393,\n \"s\": 40346,\n \"text\": \"Using a single loop only helps in shorter code\"\n },\n {\n \"code\": null,\n \"e\": 40507,\n \"s\": 40393,\n \"text\": \"The time complexity of the sorting does not change in a single loop (in comparison to more than one loop sorting)\"\n },\n {\n \"code\": null,\n \"e\": 40610,\n \"s\": 40507,\n \"text\": \"Single loop sorting shows that number of loops has little to do with time complexity of the algorithm.\"\n },\n {\n \"code\": null,\n \"e\": 40627,\n \"s\": 40612,\n \"text\": \"mohit kumar 29\"\n },\n {\n \"code\": null,\n \"e\": 40637,\n \"s\": 40627,\n \"text\": \"Rajput-Ji\"\n },\n {\n \"code\": null,\n \"e\": 40651,\n \"s\": 40637,\n \"text\": \"princiraj1992\"\n },\n {\n \"code\": null,\n \"e\": 40666,\n \"s\": 40651,\n \"text\": \"SHUBHAMSINGH10\"\n },\n {\n \"code\": null,\n \"e\": 40676,\n \"s\": 40666,\n \"text\": \"patel2127\"\n },\n {\n \"code\": null,\n \"e\": 40695,\n \"s\": 40676,\n \"text\": \"shivanisinghss2110\"\n },\n {\n \"code\": null,\n \"e\": 40703,\n \"s\": 40695,\n \"text\": \"Sorting\"\n },\n {\n \"code\": null,\n \"e\": 40722,\n \"s\": 40703,\n \"text\": \"Technical Scripter\"\n },\n {\n \"code\": 
null,\n \"e\": 40730,\n \"s\": 40722,\n \"text\": \"Sorting\"\n },\n {\n \"code\": null,\n \"e\": 40828,\n \"s\": 40730,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 40859,\n \"s\": 40828,\n \"text\": \"Chocolate Distribution Problem\"\n },\n {\n \"code\": null,\n \"e\": 40885,\n \"s\": 40859,\n \"text\": \"C++ Program for QuickSort\"\n },\n {\n \"code\": null,\n \"e\": 40917,\n \"s\": 40885,\n \"text\": \"Stability in sorting algorithms\"\n },\n {\n \"code\": null,\n \"e\": 40942,\n \"s\": 40917,\n \"text\": \"Quick Sort vs Merge Sort\"\n },\n {\n \"code\": null,\n \"e\": 40958,\n \"s\": 40942,\n \"text\": \"Sorting in Java\"\n },\n {\n \"code\": null,\n \"e\": 40980,\n \"s\": 40958,\n \"text\": \"Quickselect Algorithm\"\n },\n {\n \"code\": null,\n \"e\": 41002,\n \"s\": 40980,\n \"text\": \"Recursive Bubble Sort\"\n },\n {\n \"code\": null,\n \"e\": 41039,\n \"s\": 41002,\n \"text\": \"Check if two arrays are equal or not\"\n },\n {\n \"code\": null,\n \"e\": 41075,\n \"s\": 41039,\n \"text\": \"Longest Common Prefix using Sorting\"\n }\n]"}}},{"rowIdx":546,"cells":{"title":{"kind":"string","value":"C - Input and Output"},"text":{"kind":"string","value":"When we say Input, it means to feed some data into a program. An input can be given in the form of a file or from the command line. C programming provides a set of built-in functions to read the given input and feed it to the program as per requirement.\nWhen we say Output, it means to display some data on screen, printer, or in any file. C programming provides a set of built-in functions to output the data on the computer screen as well as to save it in text or binary files.\nC programming treats all the devices as files. 
So devices such as the display are addressed in the same way as files, and the following three files (stdin, stdout, and stderr) are automatically opened when a program executes to provide access to the keyboard and screen.\nThe file pointers are the means to access the files for reading and writing purposes. This section explains how to read values from the keyboard and how to print the result on the screen.\nThe int getchar(void) function reads the next available character from the keyboard and returns it as an integer. This function reads only a single character at a time. You can call it in a loop if you want to read more than one character.\nThe int putchar(int c) function puts the passed character on the screen and returns the same character. This function puts only a single character at a time. You can call it in a loop if you want to display more than one character. Check the following example −\n#include <stdio.h>\nint main( ) {\n\n int c;\n\n printf( \"Enter a value :\");\n c = getchar( );\n\n printf( \"\\nYou entered: \");\n putchar( c );\n\n return 0;\n}\nWhen the above code is compiled and executed, it waits for you to input some text. When you enter a text and press enter, then the program proceeds and reads only a single character and displays it as follows −\n$./a.out\nEnter a value : this is test\nYou entered: t\n\nThe char *gets(char *s) function reads a line from stdin into the buffer pointed to by s until either a terminating newline or EOF (End of File).\nThe int puts(const char *s) function writes the string s and a trailing newline to stdout.\nNOTE: The gets() function has been deprecated because it cannot limit how many characters are read and can overflow its buffer; use fgets() instead.\n#include <stdio.h>\nint main( ) {\n\n char str[100];\n\n printf( \"Enter a value :\");\n gets( str );\n\n printf( \"\\nYou entered: \");\n puts( str );\n\n return 0;\n}\nWhen the above code is compiled and executed, it waits for you to input some text. 
When you enter a text and press enter, then the program proceeds and reads the complete line till the end, and displays it as follows −\n$./a.out\nEnter a value : this is test\nYou entered: this is test\n\nThe int scanf(const char *format, ...) function reads the input from the standard input stream stdin and scans that input according to the format provided.\nThe int printf(const char *format, ...) function writes the output to the standard output stream stdout and produces the output according to the format provided.\nThe format can be a simple constant string, but you can specify %s, %d, %c, %f, etc., to print or read strings, integers, characters or floats respectively. There are many other formatting options available which can be used based on requirements. Let us now proceed with a simple example to understand the concepts better −\n#include <stdio.h>\nint main( ) {\n\n char str[100];\n int i;\n\n printf( \"Enter a value :\");\n scanf(\"%s %d\", str, &i);\n\n printf( \"\\nYou entered: %s %d \", str, i);\n\n return 0;\n}\nWhen the above code is compiled and executed, it waits for you to input some text. When you enter a text and press enter, then the program proceeds and reads the input and displays it as follows −\n$./a.out\nEnter a value : seven 7\nYou entered: seven 7\n\nHere, it should be noted that scanf() expects input in the same format as you provided %s and %d, which means you have to provide valid input like \"string integer\". If you provide \"string string\" or \"integer integer\", then it will be treated as invalid input. Secondly, while reading a string, scanf() stops reading as soon as it encounters a space, so \"this is test\" is three separate strings for scanf()."},"parsed":{"kind":"list like","value":[{"code":null,"e":2338,"s":2084,"text":"When we say Input, it means to feed some data into a program. An input can be given in the form of a file or from the command line. 
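The scanf() behaviour described above (whitespace-delimited tokens, each converted according to its conversion specifier) can be mimicked for illustration. The following is a Python sketch of the idea, not the C library routine itself; the function name and the tiny specifier table are hypothetical:

```python
def scan_tokens(line, specifiers):
    # Whitespace splits the input into tokens, and each token is then
    # converted according to its conversion specifier, which is why
    # "this is test" counts as three separate %s strings.
    converters = {"%s": str, "%d": int, "%f": float}
    tokens = line.split()
    return [converters[spec](tok) for spec, tok in zip(specifiers, tokens)]

print(scan_tokens("seven 7", ["%s", "%d"]))            # → ['seven', 7]
print(scan_tokens("this is test", ["%s", "%s", "%s"]))  # → ['this', 'is', 'test']
```

As with the real scanf(), a token that does not match its specifier (for example "seven" against %d) would fail conversion, which mirrors the "wrong input" case noted above.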
C programming provides a set of built-in functions to read the given input and feed it to the program as per requirement."},{"code":null,"e":2564,"s":2338,"text":"When we say Output, it means to display some data on screen, printer, or in any file. C programming provides a set of built-in functions to output the data on the computer screen as well as to save it in text or binary files."},{"code":null,"e":2806,"s":2564,"text":"C programming treats all the devices as files. So devices such as the display are addressed in the same way as files and the following three files are automatically opened when a program executes to provide access to the keyboard and screen."},{"code":null,"e":2990,"s":2806,"text":"The file pointers are the means to access the file for reading and writing purpose. This section explains how to read values from the screen and how to print the result on the screen."},{"code":null,"e":3257,"s":2990,"text":"The int getchar(void) function reads the next available character from the screen and returns it as an integer. This function reads only single character at a time. You can use this method in the loop in case you want to read more than one character from the screen."},{"code":null,"e":3546,"s":3257,"text":"The int putchar(int c) function puts the passed character on the screen and returns the same character. This function puts only single character at a time. You can use this method in the loop in case you want to display more than one character on the screen. Check the following example −"},{"code":null,"e":3706,"s":3546,"text":"#include \nint main( ) {\n\n int c;\n\n printf( \"Enter a value :\");\n c = getchar( );\n\n printf( \"\\nYou entered: \");\n putchar( c );\n\n return 0;\n}"},{"code":null,"e":3917,"s":3706,"text":"When the above code is compiled and executed, it waits for you to input some text. 
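The single-character model of getchar() described above contrasts with line-oriented reading. A small Python sketch (illustrative only, using an in-memory stream rather than the C functions themselves) shows the difference:

```python
import io

# getchar()-style: read exactly one character from the stream;
# fgets()-style: read the rest of the line, newline included.
stream = io.StringIO("this is test\n")
first_char = stream.read(1)        # like c = getchar()
rest_of_line = stream.readline()   # like fgets(str, sizeof str, stdin)
print(first_char)     # → t
print(rest_of_line)   # → his is test
```

This is why the first example program above echoes only "t" while the line-reading example echoes "this is test" in full.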
When you enter a text and press enter, then the program proceeds and reads only a single character and displays it as follows −"},{"code":null,"e":3971,"s":3917,"text":"$./a.out\nEnter a value : this is test\nYou entered: t\n"},{"code":null,"e":4117,"s":3971,"text":"The char *gets(char *s) function reads a line from stdin into the buffer pointed to by s until either a terminating newline or EOF (End of File)."},{"code":null,"e":4212,"s":4117,"text":"The int puts(const char *s) function writes the string 's' and 'a' trailing newline to stdout."},{"code":null,"e":4320,"s":4212,"text":"NOTE: Though it has been deprecated to use gets() function, Instead of using gets, you want to use fgets()."},{"code":null,"e":4484,"s":4320,"text":"#include \nint main( ) {\n\n char str[100];\n\n printf( \"Enter a value :\");\n gets( str );\n\n printf( \"\\nYou entered: \");\n puts( str );\n\n return 0;\n}"},{"code":null,"e":4699,"s":4484,"text":"When the above code is compiled and executed, it waits for you to input some text. When you enter a text and press enter, then the program proceeds and reads the complete line till end, and displays it as follows −"},{"code":null,"e":4764,"s":4699,"text":"$./a.out\nEnter a value : this is test\nYou entered: this is test\n"},{"code":null,"e":4921,"s":4764,"text":"The int scanf(const char *format, ...) function reads the input from the standard input stream stdin and scans that input according to the format provided."},{"code":null,"e":5083,"s":4921,"text":"The int printf(const char *format, ...) function writes the output to the standard output stream stdout and produces the output according to the format provided."},{"code":null,"e":5405,"s":5083,"text":"The format can be a simple constant string, but you can specify %s, %d, %c, %f, etc., to print or read strings, integer, character or float respectively. There are many other formatting options available which can be used based on requirements. 
Let us now proceed with a simple example to understand the concepts better −"},{"code":null,"e":5589,"s":5405,"text":"#include \nint main( ) {\n\n char str[100];\n int i;\n\n printf( \"Enter a value :\");\n scanf(\"%s %d\", str, &i);\n\n printf( \"\\nYou entered: %s %d \", str, i);\n\n return 0;\n}"},{"code":null,"e":5782,"s":5589,"text":"When the above code is compiled and executed, it waits for you to input some text. When you enter a text and press enter, then program proceeds and reads the input and displays it as follows −"},{"code":null,"e":5837,"s":5782,"text":"$./a.out\nEnter a value : seven 7\nYou entered: seven 7\n"},{"code":null,"e":6235,"s":5837,"text":"Here, it should be noted that scanf() expects input in the same format as you provided %s and %d, which means you have to provide valid inputs like \"string integer\". If you provide \"string string\" or \"integer integer\", then it will be assumed as wrong input. Secondly, while reading a string, scanf() stops reading as soon as it encounters a space, so \"this is test\" are three strings for scanf()."},{"code":null,"e":6242,"s":6235,"text":" Print"},{"code":null,"e":6253,"s":6242,"text":" Add Notes"}],"string":"[\n {\n \"code\": null,\n \"e\": 2338,\n \"s\": 2084,\n \"text\": \"When we say Input, it means to feed some data into a program. An input can be given in the form of a file or from the command line. C programming provides a set of built-in functions to read the given input and feed it to the program as per requirement.\"\n },\n {\n \"code\": null,\n \"e\": 2564,\n \"s\": 2338,\n \"text\": \"When we say Output, it means to display some data on screen, printer, or in any file. C programming provides a set of built-in functions to output the data on the computer screen as well as to save it in text or binary files.\"\n },\n {\n \"code\": null,\n \"e\": 2806,\n \"s\": 2564,\n \"text\": \"C programming treats all the devices as files. 
So devices such as the display are addressed in the same way as files and the following three files are automatically opened when a program executes to provide access to the keyboard and screen.\"\n },\n {\n \"code\": null,\n \"e\": 2990,\n \"s\": 2806,\n \"text\": \"The file pointers are the means to access the file for reading and writing purpose. This section explains how to read values from the screen and how to print the result on the screen.\"\n },\n {\n \"code\": null,\n \"e\": 3257,\n \"s\": 2990,\n \"text\": \"The int getchar(void) function reads the next available character from the screen and returns it as an integer. This function reads only single character at a time. You can use this method in the loop in case you want to read more than one character from the screen.\"\n },\n {\n \"code\": null,\n \"e\": 3546,\n \"s\": 3257,\n \"text\": \"The int putchar(int c) function puts the passed character on the screen and returns the same character. This function puts only single character at a time. You can use this method in the loop in case you want to display more than one character on the screen. Check the following example −\"\n },\n {\n \"code\": null,\n \"e\": 3706,\n \"s\": 3546,\n \"text\": \"#include \\nint main( ) {\\n\\n int c;\\n\\n printf( \\\"Enter a value :\\\");\\n c = getchar( );\\n\\n printf( \\\"\\\\nYou entered: \\\");\\n putchar( c );\\n\\n return 0;\\n}\"\n },\n {\n \"code\": null,\n \"e\": 3917,\n \"s\": 3706,\n \"text\": \"When the above code is compiled and executed, it waits for you to input some text. 
When you enter a text and press enter, then the program proceeds and reads only a single character and displays it as follows −\"\n },\n {\n \"code\": null,\n \"e\": 3971,\n \"s\": 3917,\n \"text\": \"$./a.out\\nEnter a value : this is test\\nYou entered: t\\n\"\n },\n {\n \"code\": null,\n \"e\": 4117,\n \"s\": 3971,\n \"text\": \"The char *gets(char *s) function reads a line from stdin into the buffer pointed to by s until either a terminating newline or EOF (End of File).\"\n },\n {\n \"code\": null,\n \"e\": 4212,\n \"s\": 4117,\n \"text\": \"The int puts(const char *s) function writes the string 's' and 'a' trailing newline to stdout.\"\n },\n {\n \"code\": null,\n \"e\": 4320,\n \"s\": 4212,\n \"text\": \"NOTE: Though it has been deprecated to use gets() function, Instead of using gets, you want to use fgets().\"\n },\n {\n \"code\": null,\n \"e\": 4484,\n \"s\": 4320,\n \"text\": \"#include \\nint main( ) {\\n\\n char str[100];\\n\\n printf( \\\"Enter a value :\\\");\\n gets( str );\\n\\n printf( \\\"\\\\nYou entered: \\\");\\n puts( str );\\n\\n return 0;\\n}\"\n },\n {\n \"code\": null,\n \"e\": 4699,\n \"s\": 4484,\n \"text\": \"When the above code is compiled and executed, it waits for you to input some text. When you enter a text and press enter, then the program proceeds and reads the complete line till end, and displays it as follows −\"\n },\n {\n \"code\": null,\n \"e\": 4764,\n \"s\": 4699,\n \"text\": \"$./a.out\\nEnter a value : this is test\\nYou entered: this is test\\n\"\n },\n {\n \"code\": null,\n \"e\": 4921,\n \"s\": 4764,\n \"text\": \"The int scanf(const char *format, ...) function reads the input from the standard input stream stdin and scans that input according to the format provided.\"\n },\n {\n \"code\": null,\n \"e\": 5083,\n \"s\": 4921,\n \"text\": \"The int printf(const char *format, ...) 
function writes the output to the standard output stream stdout and produces the output according to the format provided.\"\n },\n {\n \"code\": null,\n \"e\": 5405,\n \"s\": 5083,\n \"text\": \"The format can be a simple constant string, but you can specify %s, %d, %c, %f, etc., to print or read strings, integer, character or float respectively. There are many other formatting options available which can be used based on requirements. Let us now proceed with a simple example to understand the concepts better −\"\n },\n {\n \"code\": null,\n \"e\": 5589,\n \"s\": 5405,\n \"text\": \"#include \\nint main( ) {\\n\\n char str[100];\\n int i;\\n\\n printf( \\\"Enter a value :\\\");\\n scanf(\\\"%s %d\\\", str, &i);\\n\\n printf( \\\"\\\\nYou entered: %s %d \\\", str, i);\\n\\n return 0;\\n}\"\n },\n {\n \"code\": null,\n \"e\": 5782,\n \"s\": 5589,\n \"text\": \"When the above code is compiled and executed, it waits for you to input some text. When you enter a text and press enter, then program proceeds and reads the input and displays it as follows −\"\n },\n {\n \"code\": null,\n \"e\": 5837,\n \"s\": 5782,\n \"text\": \"$./a.out\\nEnter a value : seven 7\\nYou entered: seven 7\\n\"\n },\n {\n \"code\": null,\n \"e\": 6235,\n \"s\": 5837,\n \"text\": \"Here, it should be noted that scanf() expects input in the same format as you provided %s and %d, which means you have to provide valid inputs like \\\"string integer\\\". If you provide \\\"string string\\\" or \\\"integer integer\\\", then it will be assumed as wrong input. 
Secondly, while reading a string, scanf() stops reading as soon as it encounters a space, so \\\"this is test\\\" are three strings for scanf().\"\n },\n {\n \"code\": null,\n \"e\": 6242,\n \"s\": 6235,\n \"text\": \" Print\"\n },\n {\n \"code\": null,\n \"e\": 6253,\n \"s\": 6242,\n \"text\": \" Add Notes\"\n }\n]"}}},{"rowIdx":547,"cells":{"title":{"kind":"string","value":"Java Program For Closest Prime Number - GeeksforGeeks"},"text":{"kind":"string","value":"22 Dec, 2021\nGiven a number N, you have to print its closest prime number. The prime number can be lesser, equal, or greater than the given number.\nCondition: 1 ≤ N ≤ 100000\nExamples:\nInput : 16\nOutput: 17\n\nExplanation: The two nearer prime number of 16 are 13 and 17. But among these, \n17 is the closest(As its distance is only 1(17-16) from the given number).\n\nInput : 97\nOutput : 97\n\nExplanation : The closest prime number in this case is the given number number \nitself as the distance is 0 (97-97).\nApproach : \nUsing Sieve of Eratosthenes store all prime numbers in a Vector.Copy all elements in vector to the new array.Use the upper bound to find the upper bound of the given number in an array.As the array is already sorted in nature, compare previous and current indexed numbers in an array.Return number with the smallest difference.\nUsing Sieve of Eratosthenes store all prime numbers in a Vector.\nCopy all elements in vector to the new array.\nUse the upper bound to find the upper bound of the given number in an array.\nAs the array is already sorted in nature, compare previous and current indexed numbers in an array.\nReturn number with the smallest difference.\nBelow is the implementation of the approach.\nJava\n// Closest Prime Number in Java import java.util.*;import java.lang.*; public class GFG { static int max = 100005; static Vector primeNumber = new Vector<>(); static void sieveOfEratosthenes() { // Create a boolean array \"prime[0..n]\" and // initialize all entries it as true. 
A value // in prime[i] will finally be false if i is // Not a prime, else true. boolean prime[] = new boolean[max + 1]; for (int i = 0; i <= max; i++) prime[i] = true; for (int p = 2; p * p <= max; p++) { // If prime[p] is not changed, then it is a // prime if (prime[p] == true) { // Update all multiples of p for (int i = p * p; i <= max; i += p) prime[i] = false; } } // Print all prime numbers for (int i = 2; i <= max; i++) { if (prime[i] == true) primeNumber.add(i); } } static int upper_bound(Integer arr[], int low, int high, int X) { // Base Case if (low > high) return low; // Find the middle index int mid = low + (high - low) / 2; // If arr[mid] is less than // or equal to X search in // right subarray if (arr[mid] <= X) { return upper_bound(arr, mid + 1, high, X); } // If arr[mid] is greater than X // then search in left subarray return upper_bound(arr, low, mid - 1, X); } public static int closetPrime(int number) { // We will handle it (for number = 1) explicitly // as the lower/left number of 1 can give us // negative index which will cost Runtime Error. 
if (number == 1) return 2; else { // calling sieve of eratosthenes to // fill the array into prime numbers sieveOfEratosthenes(); Integer[] arr = primeNumber.toArray( new Integer[primeNumber.size()]); // searching the index int index = upper_bound(arr, 0, arr.length, number); if (arr[index] == number || arr[index - 1] == number) return number; else if (Math.abs(arr[index] - number) < Math.abs(arr[index - 1] - number)) return arr[index]; else return arr[index - 1]; } } // Driver Program public static void main(String[] args) { int number = 100; System.out.println(closetPrime(number)); }}\n101\nTime Complexity: O(N log(log(N)))\nSpace Complexity: O(N)\njyoti369\nsurindertarika1234\nJava\nJava Programs\nJava\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nComments\nOld Comments\nFunctional Interfaces in Java\nStream In Java\nConstructors in Java\nDifferent ways of Reading a text file in Java\nExceptions in Java\nConvert a String to Character array in Java\nJava Programming Examples\nConvert Double to Integer in Java\nImplementing a Linked List in Java using Class\nHow to Iterate HashMap in Java?"},"parsed":{"kind":"list like","value":[{"code":null,"e":23583,"s":23555,"text":"\n22 Dec, 2021"},{"code":null,"e":23718,"s":23583,"text":"Given a number N, you have to print its closest prime number. The prime number can be lesser, equal, or greater than the given number."},{"code":null,"e":23744,"s":23718,"text":"Condition: 1 ≤ N ≤ 100000"},{"code":null,"e":23754,"s":23744,"text":"Examples:"},{"code":null,"e":24074,"s":23754,"text":"Input : 16\nOutput: 17\n\nExplanation: The two nearer prime number of 16 are 13 and 17. 
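The approach in the Java listing above (precompute all primes up to the limit with a sieve, binary-search for the insertion point of N, then compare the two neighbouring primes) can be sketched in Python; bisect_right plays the role of the recursive upper_bound helper, and the function names here are illustrative, not part of the original code:

```python
from bisect import bisect_right

MAX = 100005  # same limit as the Java code

def sieve(limit):
    # Sieve of Eratosthenes: prime[i] stays True iff i is prime.
    prime = [True] * (limit + 1)
    prime[0] = prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if prime[p]:
            for multiple in range(p * p, limit + 1, p):
                prime[multiple] = False
    return [i for i, is_p in enumerate(prime) if is_p]

PRIMES = sieve(MAX)

def closest_prime(n):
    if n <= 2:                       # handles n = 1 explicitly, as above
        return 2
    i = bisect_right(PRIMES, n)      # first index with PRIMES[i] > n
    if PRIMES[i - 1] == n:           # n itself is prime
        return n
    below, above = PRIMES[i - 1], PRIMES[i]
    # On a tie the smaller prime wins, matching the Java comparison.
    return below if n - below <= above - n else above

print(closest_prime(16))   # → 17
print(closest_prime(97))   # → 97
print(closest_prime(100))  # → 101
```

The sieve costs O(N log log N) once; each query is then a single O(log N) binary search over the precomputed prime list.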
But among these, \n17 is the closest(As its distance is only 1(17-16) from the given number).\n\nInput : 97\nOutput : 97\n\nExplanation : The closest prime number in this case is the given number number \nitself as the distance is 0 (97-97)."},{"code":null,"e":24086,"s":24074,"text":"Approach : "},{"code":null,"e":24414,"s":24086,"text":"Using Sieve of Eratosthenes store all prime numbers in a Vector.Copy all elements in vector to the new array.Use the upper bound to find the upper bound of the given number in an array.As the array is already sorted in nature, compare previous and current indexed numbers in an array.Return number with the smallest difference."},{"code":null,"e":24479,"s":24414,"text":"Using Sieve of Eratosthenes store all prime numbers in a Vector."},{"code":null,"e":24525,"s":24479,"text":"Copy all elements in vector to the new array."},{"code":null,"e":24602,"s":24525,"text":"Use the upper bound to find the upper bound of the given number in an array."},{"code":null,"e":24702,"s":24602,"text":"As the array is already sorted in nature, compare previous and current indexed numbers in an array."},{"code":null,"e":24746,"s":24702,"text":"Return number with the smallest difference."},{"code":null,"e":24791,"s":24746,"text":"Below is the implementation of the approach."},{"code":null,"e":24796,"s":24791,"text":"Java"},{"code":"// Closest Prime Number in Java import java.util.*;import java.lang.*; public class GFG { static int max = 100005; static Vector primeNumber = new Vector<>(); static void sieveOfEratosthenes() { // Create a boolean array \"prime[0..n]\" and // initialize all entries it as true. A value // in prime[i] will finally be false if i is // Not a prime, else true. 
boolean prime[] = new boolean[max + 1]; for (int i = 0; i <= max; i++) prime[i] = true; for (int p = 2; p * p <= max; p++) { // If prime[p] is not changed, then it is a // prime if (prime[p] == true) { // Update all multiples of p for (int i = p * p; i <= max; i += p) prime[i] = false; } } // Print all prime numbers for (int i = 2; i <= max; i++) { if (prime[i] == true) primeNumber.add(i); } } static int upper_bound(Integer arr[], int low, int high, int X) { // Base Case if (low > high) return low; // Find the middle index int mid = low + (high - low) / 2; // If arr[mid] is less than // or equal to X search in // right subarray if (arr[mid] <= X) { return upper_bound(arr, mid + 1, high, X); } // If arr[mid] is greater than X // then search in left subarray return upper_bound(arr, low, mid - 1, X); } public static int closetPrime(int number) { // We will handle it (for number = 1) explicitly // as the lower/left number of 1 can give us // negative index which will cost Runtime Error. if (number == 1) return 2; else { // calling sieve of eratosthenes to // fill the array into prime numbers sieveOfEratosthenes(); Integer[] arr = primeNumber.toArray( new Integer[primeNumber.size()]); // searching the index int index = upper_bound(arr, 0, arr.length, number); if (arr[index] == number || arr[index - 1] == number) return number; else if (Math.abs(arr[index] - number) < Math.abs(arr[index - 1] - number)) return arr[index]; else return arr[index - 1]; } } // Driver Program public static void main(String[] args) { int number = 100; System.out.println(closetPrime(number)); }}","e":27432,"s":24796,"text":null},{"code":null,"e":27436,"s":27432,"text":"101"},{"code":null,"e":27470,"s":27436,"text":"Time Complexity: O(N log(log(N)))"},{"code":null,"e":27493,"s":27470,"text":"Space Complexity: 
O(N)"},{"code":null,"e":27502,"s":27493,"text":"jyoti369"},{"code":null,"e":27521,"s":27502,"text":"surindertarika1234"},{"code":null,"e":27526,"s":27521,"text":"Java"},{"code":null,"e":27540,"s":27526,"text":"Java Programs"},{"code":null,"e":27545,"s":27540,"text":"Java"},{"code":null,"e":27643,"s":27545,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":27652,"s":27643,"text":"Comments"},{"code":null,"e":27665,"s":27652,"text":"Old Comments"},{"code":null,"e":27695,"s":27665,"text":"Functional Interfaces in Java"},{"code":null,"e":27710,"s":27695,"text":"Stream In Java"},{"code":null,"e":27731,"s":27710,"text":"Constructors in Java"},{"code":null,"e":27777,"s":27731,"text":"Different ways of Reading a text file in Java"},{"code":null,"e":27796,"s":27777,"text":"Exceptions in Java"},{"code":null,"e":27840,"s":27796,"text":"Convert a String to Character array in Java"},{"code":null,"e":27866,"s":27840,"text":"Java Programming Examples"},{"code":null,"e":27900,"s":27866,"text":"Convert Double to Integer in Java"},{"code":null,"e":27947,"s":27900,"text":"Implementing a Linked List in Java using Class"}]}}},{"rowIdx":548,"cells":{"title":{"kind":"string","value":"How to Use Lambda for Efficient Python Code | by Khuyen Tran | Towards Data Science"},"text":{"kind":"string","value":"What this graph reminds you of? Amplification of amplitude in a wave? How can this be made? You take a closer look at the graph and realize something interesting. It looks like as x increases, the value on the y-axis first goes down to -1 then goes up to 1, goes down to -2 then goes up to 2, and so on. This gives you the idea that the graph can be made using a list like this
[0, -1, 1, -2, 2, -3, 3, -4, 4, -5, 5, -6, 6, -7, 7, -8, 8, -9, 9]
If you use matplotlib with the list above as an argument, you should have the graph that is close to the graph above:
This looks easy. Once you understand how the graph could be created using a list, you could easily recreate the wave graph. But what if you want to create a much cooler graph that requires a much bigger list, like this?
You definitely don’t want to write the list above manually with 200 numbers on the list. 
So you have to figure out how to use Python to create the list with the number range from (n,m)
but with 0 at the beginning, followed by -1, 1, -2, 2, ... n.
This list is not difficult to create if we realize that it is the list sorted by the absolute value of the elements: |0|, |-1|, |1|, |-2|, |2|, ..., n. So something with sorted and absolute value would work? Bingo! I will show you how to easily create this sort of function on one line of code with lambda.
Hey, I am glad you asked. The lambda keyword in Python provides a shortcut for declaring small anonymous functions. It behaves just like regular functions. They can be used as an alternative for function objects. Let’s have a small example of how to replace a function with a lambda:
def mult(x,y): return x*y
mult(2,6)
Outcome: 12
We can make the code above shorter with lambda
mult = lambda x, y: x*y
mult(2,6)
But do we need to define the name of the multiplication anyway? After all, we just want to create a function that could multiply 2 numbers right? We can reduce the function above to even simpler code:
(lambda x, y: x*y)(2, 6)
That looks shorter. But why should you bother to learn about lambda , just to save a few lines of code anyway? Because lambda could help you create something more sophisticated easily. Like sorting a list of tuples based on their letters:
tuples = [(1, 'd'), (2, 'b'), (4, 'a'), (3, 'c')]
sorted(tuples, key=lambda x: x[1])
Outcome:
[(4, 'a'), (2, 'b'), (3, 'c'), (1, 'd')]
This gives you a hint of how to create the graph above using lambda. Ready? Here it is:
import matplotlib.pyplot as plt
nums = sorted(range(-100, 101), key=lambda x: x * x)
plt.plot(nums)
This gives us the graph:
Want to create a function that could take a number to the power of another number? 
We could use the combination of nested function and lambda :
def power(n): return lambda x: x**n
power_3 = power(3)
list(map(power_3,[2,3,4,5]))
Outcome:
[8, 27, 64, 125]
Or use lambda to manipulate a pandas DataFrame
import pandas as pd
df = pd.DataFrame([[1,2,3],[4,5,6]], columns=['a','b','c'])
#Create the 4th column that is the sum of the other 3 columns
df['d'] = df.apply(lambda row: row['a']+row['b']+row['c'],axis=1)
Congratulations! Now you know how to use lambda as a shortcut for functions. I encourage you to use lambda when you want to create a nameless function for a short period of time. You could also use it as an argument to a higher-order function like above or used along with functions like filter() , map() , apply().
I hope this tutorial will give you some motivations and reasons to switch some of your Python code with lambda . A small change in your code could give you a big return in time and efficiency as you increasingly incorporate more sophisticated code for your data science projects.
Feel free to fork and play with the code for this article in this Github repo.
I like to write about basic data science concepts and play with different algorithms and data science tools. You could connect with me on LinkedIn and Twitter.
Star this repo if you want to check out the codes for all of the articles I have written. Follow me on Medium to stay informed with my latest data science articles like these:"},"parsed":{"kind":"list like","value":[{"code":null,"e":549,"s":171,"text":"What this graph reminds you of? Amplification of amplitude in a wave? How can this be made? You take a closer look at the graph and realize something interesting. It looks like as x increases, the value on the y-axis first goes down to -1 then goes up to 1, goes down to -2 then goes up to 2, and so on. 
This gives you the idea that the graph can be made using a list like this"},{"code":null,"e":616,"s":549,"text":"[0, -1, 1, -2, 2, -3, 3, -4, 4, -5, 5, -6, 6, -7, 7, -8, 8, -9, 9]"},{"code":null,"e":734,"s":616,"text":"If you use matplotlib with the list above as an argument, you should have the graph that is close to the graph above:"},{"code":null,"e":954,"s":734,"text":"This looks easy. Once you understand how the graph could be created using a list, you could easily recreate the wave graph. But what if you want to create a much cooler graph that requires a much bigger list, like this?"},{"code":null,"e":1139,"s":954,"text":"You definitely don’t want to write the list above manually with 200 numbers on the list. So you have to figure out how to use Python to create the list with the number range from (n,m)"},{"code":null,"e":1209,"s":1139,"text":"but with 0 at the beginning, followed by -1, 1, -2, 2, ... n."},{"code":null,"e":1508,"s":1209,"text":"This list is not difficult to create if we realize that it is the list sorted by the absolute value of the elements: |0|, |-1|, |1|, |-2|, |2|, ..., n. So something with sorted and absolute value would work? Bingo! I will show you how to easily create this sort of function on one line of code with lambda."},{"code":null,"e":1783,"s":1508,"text":"Hey, I am glad you asked. The lambda keyword in Python provides a shortcut for declaring small anonymous functions. It behaves just like regular functions. They can be used as an alternative for function objects. Let’s have a small example of how to replace a function with a lambda:"},{"code":null,"e":1819,"s":1783,"text":"def mult(x,y): return x*y\nmult(2,6)"},{"code":null,"e":1831,"s":1819,"text":"Outcome: 12"},{"code":null,"e":1878,"s":1831,"text":"We can make the code above shorter with lambda"},{"code":null,"e":1911,"s":1878,"text":"mult = lambda x, y: x*y\nmult(2,6)"},{"code":null,"e":2112,"s":1911,"text":"But do we need to define the name of the multiplication anyway? 
After all, we just want to create a function that could multiply 2 numbers right? We can reduce the function above to even simpler code:"},{"code":null,"e":2137,"s":2112,"text":"(lambda x, y: x*y)(2, 6)"},{"code":null,"e":2378,"s":2137,"text":"That looks shorter. But why should you bother to learn about lambda , just to save a few lines of code anyway? Because lambda could help you create something more sophisticated easily. Like sorting a list of tuples based on their letters:"},{"code":null,"e":2462,"s":2378,"text":"tuples = [(1, 'd'), (2, 'b'), (4, 'a'), (3, 'c')]\nsorted(tuples, key=lambda x: x[1])"},{"code":null,"e":2471,"s":2462,"text":"Outcome:"},{"code":null,"e":2512,"s":2471,"text":"[(4, 'a'), (2, 'b'), (3, 'c'), (1, 'd')]"},{"code":null,"e":2598,"s":2512,"text":"This gives you a hint of how to create the graph above using lambda. Ready? Here it is:"},{"code":null,"e":2706,"s":2598,"text":"import matplotlib.pyplot as plt\nnums = sorted(range(-100, 101), key=lambda x: x * x)\nplt.plot(nums)"},{"code":null,"e":2725,"s":2706,"text":"This gives us the graph:"},{"code":null,"e":2869,"s":2725,"text":"Want to create a function that could take a number to the power of another number? We could use the combination of nested function and lambda :"},{"code":null,"e":2952,"s":2869,"text":"def power(n): return lambda x: x**n\npower_3 = power(3)\nlist(map(power_3,[2,3,4,5]))"},{"code":null,"e":2961,"s":2952,"text":"Outcome:"},{"code":null,"e":2978,"s":2961,"text":"[8, 27, 64, 125]"},{"code":null,"e":3025,"s":2978,"text":"Or use lambda to manipulate a pandas DataFrame"},{"code":null,"e":3104,"s":3025,"text":"import pandas as pd\ndf = pd.DataFrame([[1,2,3],[4,5,6]], columns=['a','b','c'])"},{"code":null,"e":3231,"s":3104,"text":"#Create the 4th column that is the sum of the other 3 columns\ndf['d'] = df.apply(lambda row: row['a']+row['b']+row['c'],axis=1)"},{"code":null,"e":3546,"s":3231,"text":"Congratulations! Now you know how to use lambda as a shortcut for functions. 
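Since the article recommends pairing lambda with filter() and map(), here is a small hedged illustration of those two combinations (this example is an addition, not part of the original article; the variable names are invented here):

```python
# filter() keeps the elements for which the lambda returns a truthy value;
# map() applies the lambda to every element.
nums = [1, 2, 3, 4, 5, 6]

evens = list(filter(lambda x: x % 2 == 0, nums))
squares = list(map(lambda x: x * x, nums))

print(evens)    # [2, 4, 6]
print(squares)  # [1, 4, 9, 16, 25, 36]
```

The same transformations can also be written as list comprehensions, which many style guides prefer over lambda with filter/map.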
I encourage you to use lambda when you want to create a nameless function for a short period of time. You could also use it as an argument to a higher-order function like above or used along with functions like filter() , map() , apply()."},{"code":null,"e":3826,"s":3546,"text":"I hope this tutorial will give you some motivations and reasons to switch some of your Python code with lambda . A small change in your code could give you a big return in time and efficiency as you increasingly incorporate more sophisticated code for your data science projects."},{"code":null,"e":3905,"s":3826,"text":"Feel free to fork and play with the code for this article in this Github repo."},{"code":null,"e":4065,"s":3905,"text":"I like to write about basic data science concepts and play with different algorithms and data science tools. You could connect with me on LinkedIn and Twitter."}]}}},{"rowIdx":549,"cells":{"title":{"kind":"string","value":"Python 3 - String rstrip() Method"},"text":{"kind":"string","value":"The rstrip() method returns a copy of the string in which all chars have been stripped from the end of the string (default whitespace characters).
Following is the syntax for rstrip() method −
str.rstrip([chars])

chars − You can supply what chars have to be trimmed.
This method returns a copy of the string in which all chars have been stripped from the end of the string (default whitespace characters).
The following example shows the usage of rstrip() method.
#!/usr/bin/python3

str = " this is string example....wow!!! "
print (str.rstrip())

str = "*****this is string example....wow!!!*****"
print (str.rstrip('*'))
When we run above program, it produces the following result −
 this is string example....wow!!!
*****this is string example....wow!!!


 187 Lectures 
 17.5 hours 

 Malhar Lathkar

 55 Lectures 
 8 hours 

 Arnab Chakraborty

 136 Lectures 
 11 hours 

 In28Minutes Official

 75 Lectures 
 13 hours 

 Eduonix Learning Solutions

 70 Lectures 
 8.5 hours 

 Lets Kode It

 63 Lectures 
 6 hours 

 Abhilash Nelson
 Print
 Add Notes
 Bookmark this page"},"parsed":{"kind":"list like","value":[{"code":null,"e":2487,"s":2340,"text":"The rstrip() method returns a copy of the string in which all chars have been stripped from the end of the string (default whitespace characters)."},{"code":null,"e":2533,"s":2487,"text":"Following is the syntax for rstrip() method −"},{"code":null,"e":2554,"s":2533,"text":"str.rstrip([chars])\n"},{"code":null,"e":2608,"s":2554,"text":"chars − You can supply what chars have to be trimmed."},{"code":null,"e":2747,"s":2608,"text":"This method returns a copy of the string in which all chars have been stripped from the end of the string (default whitespace 
characters)."},{"code":null,"e":2805,"s":2747,"text":"The following example shows the usage of rstrip() method."},{"code":null,"e":2973,"s":2805,"text":"#!/usr/bin/python3\n\nstr = \" this is string example....wow!!! \"\nprint (str.rstrip())\n\nstr = \"*****this is string example....wow!!!*****\"\nprint (str.rstrip('*'))"},{"code":null,"e":3035,"s":2973,"text":"When we run above program, it produces the following result −"},{"code":null,"e":3112,"s":3035,"text":" this is string example....wow!!!\n*****this is string example....wow!!!\n"},{"code":null,"e":3149,"s":3112,"text":"\n 187 Lectures \n 17.5 hours \n"},{"code":null,"e":3165,"s":3149,"text":" Malhar Lathkar"},{"code":null,"e":3198,"s":3165,"text":"\n 55 Lectures \n 8 hours \n"},{"code":null,"e":3217,"s":3198,"text":" Arnab Chakraborty"},{"code":null,"e":3252,"s":3217,"text":"\n 136 Lectures \n 11 hours \n"},{"code":null,"e":3274,"s":3252,"text":" In28Minutes Official"},{"code":null,"e":3308,"s":3274,"text":"\n 75 Lectures \n 13 hours \n"},{"code":null,"e":3336,"s":3308,"text":" Eduonix Learning Solutions"},{"code":null,"e":3371,"s":3336,"text":"\n 70 Lectures \n 8.5 hours \n"},{"code":null,"e":3385,"s":3371,"text":" Lets Kode It"},{"code":null,"e":3418,"s":3385,"text":"\n 63 Lectures \n 6 hours \n"},{"code":null,"e":3435,"s":3418,"text":" Abhilash Nelson"},{"code":null,"e":3442,"s":3435,"text":" Print"},{"code":null,"e":3453,"s":3442,"text":" Add Notes"}]}}},{"rowIdx":550,"cells":{"title":{"kind":"string","value":"MongoDB query to search date records using only Month and Day"},"text":{"kind":"string","value":"To search using month and day only, use $where. Let us create a collection with documents −
> db.demo181.insertOne({"ShippingDate":new ISODate("2020-01-10")});
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e398a699e4f06af551997fe")
}
> db.demo181.insertOne({"ShippingDate":new ISODate("2019-12-11")});
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e398a729e4f06af551997ff")
}
> db.demo181.insertOne({"ShippingDate":new ISODate("2018-01-10")});
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e398a7d9e4f06af55199800")
}
> db.demo181.insertOne({"ShippingDate":new ISODate("2020-10-12")});
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e398a879e4f06af55199801")
}
Display all documents from a collection with the help of find() method −
> db.demo181.find();
This will produce the following output −
{ \"_id\" : 
ObjectId(\"5e398a699e4f06af551997fe\"), \"ShippingDate\" : ISODate(\"2020-01-10T00:00:00Z\") }\n{ \"_id\" : ObjectId(\"5e398a7d9e4f06af55199800\"), \"ShippingDate\" : ISODate(\"2018-01-10T00:00:00Z\") }"},"parsed":{"kind":"list like","value":[{"code":null,"e":1154,"s":1062,"text":"To search using month and day only, use $where. Let us create a collection with documents −"},{"code":null,"e":1766,"s":1154,"text":"> db.demo181.insertOne({\"ShippingDate\":new ISODate(\"2020-01-10\")});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e398a699e4f06af551997fe\")\n}\n> db.demo181.insertOne({\"ShippingDate\":new ISODate(\"2019-12-11\")});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e398a729e4f06af551997ff\")\n}\n> db.demo181.insertOne({\"ShippingDate\":new ISODate(\"2018-01-10\")});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e398a7d9e4f06af55199800\")\n}\n> db.demo181.insertOne({\"ShippingDate\":new ISODate(\"2020-10-12\")});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e398a879e4f06af55199801\")\n}"},{"code":null,"e":1839,"s":1766,"text":"Display all documents from a collection with the help of find() method −"},{"code":null,"e":1860,"s":1839,"text":"> db.demo181.find();"},{"code":null,"e":1901,"s":1860,"text":"This will produce the following output −"},{"code":null,"e":2297,"s":1901,"text":"{ \"_id\" : ObjectId(\"5e398a699e4f06af551997fe\"), \"ShippingDate\" : ISODate(\"2020-01-10T00:00:00Z\") }\n{ \"_id\" : ObjectId(\"5e398a729e4f06af551997ff\"), \"ShippingDate\" : ISODate(\"2019-12-11T00:00:00Z\") }\n{ \"_id\" : ObjectId(\"5e398a7d9e4f06af55199800\"), \"ShippingDate\" : ISODate(\"2018-01-10T00:00:00Z\") }\n{ \"_id\" : ObjectId(\"5e398a879e4f06af55199801\"), \"ShippingDate\" : ISODate(\"2020-10-12T00:00:00Z\") }"},{"code":null,"e":2351,"s":2297,"text":"Following is the query to search with Month and Day −"},{"code":null,"e":2474,"s":2351,"text":"> db.demo181.find({$where : function() { return 
this.ShippingDate.getMonth() == 1 || this.ShippingDate.getDate() == 10} })"},{"code":null,"e":2515,"s":2474,"text":"This will produce the following output −"},{"code":null,"e":2713,"s":2515,"text":"{ \"_id\" : ObjectId(\"5e398a699e4f06af551997fe\"), \"ShippingDate\" : ISODate(\"2020-01-10T00:00:00Z\") }\n{ \"_id\" : ObjectId(\"5e398a7d9e4f06af55199800\"), \"ShippingDate\" : ISODate(\"2018-01-10T00:00:00Z\") }"}],"string":"[\n {\n \"code\": null,\n \"e\": 1154,\n \"s\": 1062,\n \"text\": \"To search using month and day only, use $where. Let us create a collection with documents −\"\n },\n {\n \"code\": null,\n \"e\": 1766,\n \"s\": 1154,\n \"text\": \"> db.demo181.insertOne({\\\"ShippingDate\\\":new ISODate(\\\"2020-01-10\\\")});\\n{\\n \\\"acknowledged\\\" : true,\\n \\\"insertedId\\\" : ObjectId(\\\"5e398a699e4f06af551997fe\\\")\\n}\\n> db.demo181.insertOne({\\\"ShippingDate\\\":new ISODate(\\\"2019-12-11\\\")});\\n{\\n \\\"acknowledged\\\" : true,\\n \\\"insertedId\\\" : ObjectId(\\\"5e398a729e4f06af551997ff\\\")\\n}\\n> db.demo181.insertOne({\\\"ShippingDate\\\":new ISODate(\\\"2018-01-10\\\")});\\n{\\n \\\"acknowledged\\\" : true,\\n \\\"insertedId\\\" : ObjectId(\\\"5e398a7d9e4f06af55199800\\\")\\n}\\n> db.demo181.insertOne({\\\"ShippingDate\\\":new ISODate(\\\"2020-10-12\\\")});\\n{\\n \\\"acknowledged\\\" : true,\\n \\\"insertedId\\\" : ObjectId(\\\"5e398a879e4f06af55199801\\\")\\n}\"\n },\n {\n \"code\": null,\n \"e\": 1839,\n \"s\": 1766,\n \"text\": \"Display all documents from a collection with the help of find() method −\"\n },\n {\n \"code\": null,\n \"e\": 1860,\n \"s\": 1839,\n \"text\": \"> db.demo181.find();\"\n },\n {\n \"code\": null,\n \"e\": 1901,\n \"s\": 1860,\n \"text\": \"This will produce the following output −\"\n },\n {\n \"code\": null,\n \"e\": 2297,\n \"s\": 1901,\n \"text\": \"{ \\\"_id\\\" : ObjectId(\\\"5e398a699e4f06af551997fe\\\"), \\\"ShippingDate\\\" : ISODate(\\\"2020-01-10T00:00:00Z\\\") }\\n{ \\\"_id\\\" : 
ObjectId(\\\"5e398a729e4f06af551997ff\\\"), \\\"ShippingDate\\\" : ISODate(\\\"2019-12-11T00:00:00Z\\\") }\\n{ \\\"_id\\\" : ObjectId(\\\"5e398a7d9e4f06af55199800\\\"), \\\"ShippingDate\\\" : ISODate(\\\"2018-01-10T00:00:00Z\\\") }\\n{ \\\"_id\\\" : ObjectId(\\\"5e398a879e4f06af55199801\\\"), \\\"ShippingDate\\\" : ISODate(\\\"2020-10-12T00:00:00Z\\\") }\"\n },\n {\n \"code\": null,\n \"e\": 2351,\n \"s\": 2297,\n \"text\": \"Following is the query to search with Month and Day −\"\n },\n {\n \"code\": null,\n \"e\": 2474,\n \"s\": 2351,\n \"text\": \"> db.demo181.find({$where : function() { return this.ShippingDate.getMonth() == 1 || this.ShippingDate.getDate() == 10} })\"\n },\n {\n \"code\": null,\n \"e\": 2515,\n \"s\": 2474,\n \"text\": \"This will produce the following output −\"\n },\n {\n \"code\": null,\n \"e\": 2713,\n \"s\": 2515,\n \"text\": \"{ \\\"_id\\\" : ObjectId(\\\"5e398a699e4f06af551997fe\\\"), \\\"ShippingDate\\\" : ISODate(\\\"2020-01-10T00:00:00Z\\\") }\\n{ \\\"_id\\\" : ObjectId(\\\"5e398a7d9e4f06af55199800\\\"), \\\"ShippingDate\\\" : ISODate(\\\"2018-01-10T00:00:00Z\\\") }\"\n }\n]"}}},{"rowIdx":551,"cells":{"title":{"kind":"string","value":"Calculate total time duration (add time) in MySQL?"},"text":{"kind":"string","value":"To calculate the total time duration in MySQL, you need to use SEC_TO_TIME(). 
Let us see an example by creating a table\nmysql> create table AddTotalTimeDemo\n - > (\n - > Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,\n - > LoginTime time\n - > );\nQuery OK, 0 rows affected (0.63 sec)\nInsert some records in the table using insert command.\nThe query is as follows\nmysql> insert into AddTotalTimeDemo(LoginTime) values('05:05:00');\nQuery OK, 1 row affected (0.10 sec)\nmysql> insert into AddTotalTimeDemo(LoginTime) values('07:20:00');\nQuery OK, 1 row affected (0.16 sec)\nmysql> insert into AddTotalTimeDemo(LoginTime) values('02:05:00');\nQuery OK, 1 row affected (0.17 sec)\nmysql> insert into AddTotalTimeDemo(LoginTime) values('03:03:00');\nQuery OK, 1 row affected (0.25 sec)\nmysql> insert into AddTotalTimeDemo(LoginTime) values('05:07:00');\nQuery OK, 1 row affected (0.11 sec)\nDisplay all records from the table using select statement.\nThe query is as follows\nmysql> select *from AddTotalTimeDemo;\nThe following is the output\n+----+-----------+\n| Id | LoginTime |\n+----+-----------+\n| 1 | 05:05:00 |\n| 2 | 07:20:00 |\n| 3 | 02:05:00 |\n| 4 | 03:03:00 |\n| 5 | 05:07:00 |\n+----+-----------+\n5 rows in set (0.00 sec)\nThe following is the query to calculate the total time duration in MySQL\nmysql> SELECT SEC_TO_TIME(SUM(TIME_TO_SEC(LoginTime))) AS TotalTime from AddTotalTimeDemo;\nThe following is the output\n+-----------+\n| TotalTime |\n+-----------+\n| 22:40:00 |\n+-----------+\n1 row in set (0.00 sec)"},"parsed":{"kind":"list like","value":[{"code":null,"e":1182,"s":1062,"text":"To calculate the total time duration in MySQL, you need to use SEC_TO_TIME(). 
Let us see an example by creating a table"},{"code":null,"e":1348,"s":1182,"text":"mysql> create table AddTotalTimeDemo\n - > (\n - > Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,\n - > LoginTime time\n - > );\nQuery OK, 0 rows affected (0.63 sec)"},{"code":null,"e":1403,"s":1348,"text":"Insert some records in the table using insert command."},{"code":null,"e":1427,"s":1403,"text":"The query is as follows"},{"code":null,"e":1942,"s":1427,"text":"mysql> insert into AddTotalTimeDemo(LoginTime) values('05:05:00');\nQuery OK, 1 row affected (0.10 sec)\nmysql> insert into AddTotalTimeDemo(LoginTime) values('07:20:00');\nQuery OK, 1 row affected (0.16 sec)\nmysql> insert into AddTotalTimeDemo(LoginTime) values('02:05:00');\nQuery OK, 1 row affected (0.17 sec)\nmysql> insert into AddTotalTimeDemo(LoginTime) values('03:03:00');\nQuery OK, 1 row affected (0.25 sec)\nmysql> insert into AddTotalTimeDemo(LoginTime) values('05:07:00');\nQuery OK, 1 row affected (0.11 sec)"},{"code":null,"e":2001,"s":1942,"text":"Display all records from the table using select statement."},{"code":null,"e":2025,"s":2001,"text":"The query is as follows"},{"code":null,"e":2063,"s":2025,"text":"mysql> select *from AddTotalTimeDemo;"},{"code":null,"e":2091,"s":2063,"text":"The following is the output"},{"code":null,"e":2287,"s":2091,"text":"+----+-----------+\n| Id | LoginTime |\n+----+-----------+\n| 1 | 05:05:00 |\n| 2 | 07:20:00 |\n| 3 | 02:05:00 |\n| 4 | 03:03:00 |\n| 5 | 05:07:00 |\n+----+-----------+\n5 rows in set (0.00 sec)"},{"code":null,"e":2360,"s":2287,"text":"The following is the query to calculate the total time duration in MySQL"},{"code":null,"e":2451,"s":2360,"text":"mysql> SELECT SEC_TO_TIME(SUM(TIME_TO_SEC(LoginTime))) AS TotalTime from AddTotalTimeDemo;"},{"code":null,"e":2479,"s":2451,"text":"The following is the output"},{"code":null,"e":2573,"s":2479,"text":"+-----------+\n| TotalTime |\n+-----------+\n| 22:40:00 |\n+-----------+\n1 row in set (0.00 sec)"}],"string":"[\n {\n 
\"code\": null,\n \"e\": 1182,\n \"s\": 1062,\n \"text\": \"To calculate the total time duration in MySQL, you need to use SEC_TO_TIME(). Let us see an example by creating a table\"\n },\n {\n \"code\": null,\n \"e\": 1348,\n \"s\": 1182,\n \"text\": \"mysql> create table AddTotalTimeDemo\\n - > (\\n - > Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,\\n - > LoginTime time\\n - > );\\nQuery OK, 0 rows affected (0.63 sec)\"\n },\n {\n \"code\": null,\n \"e\": 1403,\n \"s\": 1348,\n \"text\": \"Insert some records in the table using insert command.\"\n },\n {\n \"code\": null,\n \"e\": 1427,\n \"s\": 1403,\n \"text\": \"The query is as follows\"\n },\n {\n \"code\": null,\n \"e\": 1942,\n \"s\": 1427,\n \"text\": \"mysql> insert into AddTotalTimeDemo(LoginTime) values('05:05:00');\\nQuery OK, 1 row affected (0.10 sec)\\nmysql> insert into AddTotalTimeDemo(LoginTime) values('07:20:00');\\nQuery OK, 1 row affected (0.16 sec)\\nmysql> insert into AddTotalTimeDemo(LoginTime) values('02:05:00');\\nQuery OK, 1 row affected (0.17 sec)\\nmysql> insert into AddTotalTimeDemo(LoginTime) values('03:03:00');\\nQuery OK, 1 row affected (0.25 sec)\\nmysql> insert into AddTotalTimeDemo(LoginTime) values('05:07:00');\\nQuery OK, 1 row affected (0.11 sec)\"\n },\n {\n \"code\": null,\n \"e\": 2001,\n \"s\": 1942,\n \"text\": \"Display all records from the table using select statement.\"\n },\n {\n \"code\": null,\n \"e\": 2025,\n \"s\": 2001,\n \"text\": \"The query is as follows\"\n },\n {\n \"code\": null,\n \"e\": 2063,\n \"s\": 2025,\n \"text\": \"mysql> select *from AddTotalTimeDemo;\"\n },\n {\n \"code\": null,\n \"e\": 2091,\n \"s\": 2063,\n \"text\": \"The following is the output\"\n },\n {\n \"code\": null,\n \"e\": 2287,\n \"s\": 2091,\n \"text\": \"+----+-----------+\\n| Id | LoginTime |\\n+----+-----------+\\n| 1 | 05:05:00 |\\n| 2 | 07:20:00 |\\n| 3 | 02:05:00 |\\n| 4 | 03:03:00 |\\n| 5 | 05:07:00 |\\n+----+-----------+\\n5 rows in set (0.00 sec)\"\n },\n {\n \"code\": 
null,\n \"e\": 2360,\n \"s\": 2287,\n \"text\": \"The following is the query to calculate the total time duration in MySQL\"\n },\n {\n \"code\": null,\n \"e\": 2451,\n \"s\": 2360,\n \"text\": \"mysql> SELECT SEC_TO_TIME(SUM(TIME_TO_SEC(LoginTime))) AS TotalTime from AddTotalTimeDemo;\"\n },\n {\n \"code\": null,\n \"e\": 2479,\n \"s\": 2451,\n \"text\": \"The following is the output\"\n },\n {\n \"code\": null,\n \"e\": 2573,\n \"s\": 2479,\n \"text\": \"+-----------+\\n| TotalTime |\\n+-----------+\\n| 22:40:00 |\\n+-----------+\\n1 row in set (0.00 sec)\"\n }\n]"}}},{"rowIdx":552,"cells":{"title":{"kind":"string","value":"Fortran - Pointers"},"text":{"kind":"string","value":"In most programming languages, a pointer variable stores the memory address of an object. However, in Fortran, a pointer is a data object that has more functionalities than just storing the memory address. It contains more information about a particular object, like type, rank, extents, and memory address.\nA pointer is associated with a target by allocation or pointer assignment.\nA pointer variable is declared with the pointer attribute.\nThe following examples shows declaration of pointer variables −\ninteger, pointer :: p1 ! pointer to integer \nreal, pointer, dimension (:) :: pra ! pointer to 1-dim real array \nreal, pointer, dimension (:,:) :: pra2 ! pointer to 2-dim real array\nA pointer can point to −\nAn area of dynamically allocated memory.\nAn area of dynamically allocated memory.\nA data object of the same type as the pointer, with the target attribute.\nA data object of the same type as the pointer, with the target attribute.\nThe allocate statement allows you to allocate space for a pointer object. 
For example −\nprogram pointerExample\nimplicit none\n\n integer, pointer :: p1\n allocate(p1)\n \n p1 = 1\n Print *, p1\n \n p1 = p1 + 4\n Print *, p1\n \nend program pointerExample\nWhen the above code is compiled and executed, it produces the following result −\n1\n5\n\nYou should empty the allocated storage space by the deallocate statement when it is no longer required and avoid accumulation of unused and unusable memory space.\nA target is another normal variable, with space set aside for it. A target variable must be declared with the target attribute.\nYou associate a pointer variable with a target variable using the association operator (=>).\nLet us rewrite the previous example, to demonstrate the concept −\nprogram pointerExample\nimplicit none\n\n integer, pointer :: p1\n integer, target :: t1 \n \n p1=>t1\n p1 = 1\n \n Print *, p1\n Print *, t1\n \n p1 = p1 + 4\n \n Print *, p1\n Print *, t1\n \n t1 = 8\n \n Print *, p1\n Print *, t1\n \nend program pointerExample\nWhen the above code is compiled and executed, it produces the following result −\n1\n1\n5\n5\n8\n8\n\nA pointer can be −\nUndefined\nAssociated\nDisassociated\nIn the above program, we have associated the pointer p1, with the target t1, using the => operator. The function associated, tests a pointer’s association status.\nThe nullify statement disassociates a pointer from a target.\nNullify does not empty the targets as there could be more than one pointer pointing to the same target. 
However, emptying the pointer implies nullification also.\nThe following example demonstrates the concepts −\nprogram pointerExample\nimplicit none\n\n integer, pointer :: p1\n integer, target :: t1 \n integer, target :: t2\n \n p1=>t1\n p1 = 1\n \n Print *, p1\n Print *, t1\n \n p1 = p1 + 4\n Print *, p1\n Print *, t1\n \n t1 = 8\n Print *, p1\n Print *, t1\n \n nullify(p1)\n Print *, t1\n \n p1=>t2\n Print *, associated(p1)\n Print*, associated(p1, t1)\n Print*, associated(p1, t2)\n \n !what is the value of p1 at present\n Print *, p1\n Print *, t2\n \n p1 = 10\n Print *, p1\n Print *, t2\n \nend program pointerExample\nWhen the above code is compiled and executed, it produces the following result −\n1\n1\n5\n5\n8\n8\n8\nT\nF\nT\n0\n0\n10\n10\n\nPlease note that each time you run the code, the memory addresses will be different.\nprogram pointerExample\nimplicit none\n\n integer, pointer :: a, b\n integer, target :: t\n integer :: n\n \n t = 1\n a => t\n t = 2\n b => t\n n = a + b\n \n Print *, a, b, t, n \n \nend program pointerExample\nWhen the above code is compiled and executed, it produces the following result −\n2 2 2 4\n\n Print\n Add Notes\n Bookmark this page"},"parsed":{"kind":"list like","value":[{"code":null,"e":2454,"s":2146,"text":"In most programming languages, a pointer variable stores the memory address of an object. However, in Fortran, a pointer is a data object that has more functionalities than just storing the memory address. It contains more information about a particular object, like type, rank, extents, and memory address."},{"code":null,"e":2529,"s":2454,"text":"A pointer is associated with a target by allocation or pointer assignment."},{"code":null,"e":2588,"s":2529,"text":"A pointer variable is declared with the pointer attribute."},{"code":null,"e":2652,"s":2588,"text":"The following examples shows declaration of pointer variables −"},{"code":null,"e":2835,"s":2652,"text":"integer, pointer :: p1 ! 
pointer to integer \nreal, pointer, dimension (:) :: pra ! pointer to 1-dim real array \nreal, pointer, dimension (:,:) :: pra2 ! pointer to 2-dim real array"},{"code":null,"e":2860,"s":2835,"text":"A pointer can point to −"},{"code":null,"e":2901,"s":2860,"text":"An area of dynamically allocated memory."},{"code":null,"e":2942,"s":2901,"text":"An area of dynamically allocated memory."},{"code":null,"e":3016,"s":2942,"text":"A data object of the same type as the pointer, with the target attribute."},{"code":null,"e":3090,"s":3016,"text":"A data object of the same type as the pointer, with the target attribute."},{"code":null,"e":3178,"s":3090,"text":"The allocate statement allows you to allocate space for a pointer object. For example −"},{"code":null,"e":3352,"s":3178,"text":"program pointerExample\nimplicit none\n\n integer, pointer :: p1\n allocate(p1)\n \n p1 = 1\n Print *, p1\n \n p1 = p1 + 4\n Print *, p1\n \nend program pointerExample"},{"code":null,"e":3433,"s":3352,"text":"When the above code is compiled and executed, it produces the following result −"},{"code":null,"e":3438,"s":3433,"text":"1\n5\n"},{"code":null,"e":3601,"s":3438,"text":"You should empty the allocated storage space by the deallocate statement when it is no longer required and avoid accumulation of unused and unusable memory space."},{"code":null,"e":3729,"s":3601,"text":"A target is another normal variable, with space set aside for it. 
A target variable must be declared with the target attribute."},{"code":null,"e":3822,"s":3729,"text":"You associate a pointer variable with a target variable using the association operator (=>)."},{"code":null,"e":3888,"s":3822,"text":"Let us rewrite the previous example, to demonstrate the concept −"},{"code":null,"e":4168,"s":3888,"text":"program pointerExample\nimplicit none\n\n integer, pointer :: p1\n integer, target :: t1 \n \n p1=>t1\n p1 = 1\n \n Print *, p1\n Print *, t1\n \n p1 = p1 + 4\n \n Print *, p1\n Print *, t1\n \n t1 = 8\n \n Print *, p1\n Print *, t1\n \nend program pointerExample"},{"code":null,"e":4249,"s":4168,"text":"When the above code is compiled and executed, it produces the following result −"},{"code":null,"e":4262,"s":4249,"text":"1\n1\n5\n5\n8\n8\n"},{"code":null,"e":4281,"s":4262,"text":"A pointer can be −"},{"code":null,"e":4291,"s":4281,"text":"Undefined"},{"code":null,"e":4302,"s":4291,"text":"Associated"},{"code":null,"e":4316,"s":4302,"text":"Disassociated"},{"code":null,"e":4479,"s":4316,"text":"In the above program, we have associated the pointer p1, with the target t1, using the => operator. The function associated, tests a pointer’s association status."},{"code":null,"e":4540,"s":4479,"text":"The nullify statement disassociates a pointer from a target."},{"code":null,"e":4702,"s":4540,"text":"Nullify does not empty the targets as there could be more than one pointer pointing to the same target. 
However, emptying the pointer implies nullification also."},{"code":null,"e":4752,"s":4702,"text":"The following example demonstrates the concepts −"},{"code":null,"e":5302,"s":4752,"text":"program pointerExample\nimplicit none\n\n integer, pointer :: p1\n integer, target :: t1 \n integer, target :: t2\n \n p1=>t1\n p1 = 1\n \n Print *, p1\n Print *, t1\n \n p1 = p1 + 4\n Print *, p1\n Print *, t1\n \n t1 = 8\n Print *, p1\n Print *, t1\n \n nullify(p1)\n Print *, t1\n \n p1=>t2\n Print *, associated(p1)\n Print*, associated(p1, t1)\n Print*, associated(p1, t2)\n \n !what is the value of p1 at present\n Print *, p1\n Print *, t2\n \n p1 = 10\n Print *, p1\n Print *, t2\n \nend program pointerExample"},{"code":null,"e":5383,"s":5302,"text":"When the above code is compiled and executed, it produces the following result −"},{"code":null,"e":5414,"s":5383,"text":"1\n1\n5\n5\n8\n8\n8\nT\nF\nT\n0\n0\n10\n10\n"},{"code":null,"e":5499,"s":5414,"text":"Please note that each time you run the code, the memory addresses will be different."},{"code":null,"e":5719,"s":5499,"text":"program pointerExample\nimplicit none\n\n integer, pointer :: a, b\n integer, target :: t\n integer :: n\n \n t = 1\n a => t\n t = 2\n b => t\n n = a + b\n \n Print *, a, b, t, n \n \nend program pointerExample"},{"code":null,"e":5800,"s":5719,"text":"When the above code is compiled and executed, it produces the following result −"},{"code":null,"e":5812,"s":5800,"text":"2 2 2 4\n"},{"code":null,"e":5819,"s":5812,"text":" Print"},{"code":null,"e":5830,"s":5819,"text":" Add Notes"}],"string":"[\n {\n \"code\": null,\n \"e\": 2454,\n \"s\": 2146,\n \"text\": \"In most programming languages, a pointer variable stores the memory address of an object. However, in Fortran, a pointer is a data object that has more functionalities than just storing the memory address. 
It contains more information about a particular object, like type, rank, extents, and memory address.\"\n },\n {\n \"code\": null,\n \"e\": 2529,\n \"s\": 2454,\n \"text\": \"A pointer is associated with a target by allocation or pointer assignment.\"\n },\n {\n \"code\": null,\n \"e\": 2588,\n \"s\": 2529,\n \"text\": \"A pointer variable is declared with the pointer attribute.\"\n },\n {\n \"code\": null,\n \"e\": 2652,\n \"s\": 2588,\n \"text\": \"The following examples shows declaration of pointer variables −\"\n },\n {\n \"code\": null,\n \"e\": 2835,\n \"s\": 2652,\n \"text\": \"integer, pointer :: p1 ! pointer to integer \\nreal, pointer, dimension (:) :: pra ! pointer to 1-dim real array \\nreal, pointer, dimension (:,:) :: pra2 ! pointer to 2-dim real array\"\n },\n {\n \"code\": null,\n \"e\": 2860,\n \"s\": 2835,\n \"text\": \"A pointer can point to −\"\n },\n {\n \"code\": null,\n \"e\": 2901,\n \"s\": 2860,\n \"text\": \"An area of dynamically allocated memory.\"\n },\n {\n \"code\": null,\n \"e\": 2942,\n \"s\": 2901,\n \"text\": \"An area of dynamically allocated memory.\"\n },\n {\n \"code\": null,\n \"e\": 3016,\n \"s\": 2942,\n \"text\": \"A data object of the same type as the pointer, with the target attribute.\"\n },\n {\n \"code\": null,\n \"e\": 3090,\n \"s\": 3016,\n \"text\": \"A data object of the same type as the pointer, with the target attribute.\"\n },\n {\n \"code\": null,\n \"e\": 3178,\n \"s\": 3090,\n \"text\": \"The allocate statement allows you to allocate space for a pointer object. 
For example −\"\n },\n {\n \"code\": null,\n \"e\": 3352,\n \"s\": 3178,\n \"text\": \"program pointerExample\\nimplicit none\\n\\n integer, pointer :: p1\\n allocate(p1)\\n \\n p1 = 1\\n Print *, p1\\n \\n p1 = p1 + 4\\n Print *, p1\\n \\nend program pointerExample\"\n },\n {\n \"code\": null,\n \"e\": 3433,\n \"s\": 3352,\n \"text\": \"When the above code is compiled and executed, it produces the following result −\"\n },\n {\n \"code\": null,\n \"e\": 3438,\n \"s\": 3433,\n \"text\": \"1\\n5\\n\"\n },\n {\n \"code\": null,\n \"e\": 3601,\n \"s\": 3438,\n \"text\": \"You should empty the allocated storage space by the deallocate statement when it is no longer required and avoid accumulation of unused and unusable memory space.\"\n },\n {\n \"code\": null,\n \"e\": 3729,\n \"s\": 3601,\n \"text\": \"A target is another normal variable, with space set aside for it. A target variable must be declared with the target attribute.\"\n },\n {\n \"code\": null,\n \"e\": 3822,\n \"s\": 3729,\n \"text\": \"You associate a pointer variable with a target variable using the association operator (=>).\"\n },\n {\n \"code\": null,\n \"e\": 3888,\n \"s\": 3822,\n \"text\": \"Let us rewrite the previous example, to demonstrate the concept −\"\n },\n {\n \"code\": null,\n \"e\": 4168,\n \"s\": 3888,\n \"text\": \"program pointerExample\\nimplicit none\\n\\n integer, pointer :: p1\\n integer, target :: t1 \\n \\n p1=>t1\\n p1 = 1\\n \\n Print *, p1\\n Print *, t1\\n \\n p1 = p1 + 4\\n \\n Print *, p1\\n Print *, t1\\n \\n t1 = 8\\n \\n Print *, p1\\n Print *, t1\\n \\nend program pointerExample\"\n },\n {\n \"code\": null,\n \"e\": 4249,\n \"s\": 4168,\n \"text\": \"When the above code is compiled and executed, it produces the following result −\"\n },\n {\n \"code\": null,\n \"e\": 4262,\n \"s\": 4249,\n \"text\": \"1\\n1\\n5\\n5\\n8\\n8\\n\"\n },\n {\n \"code\": null,\n \"e\": 4281,\n \"s\": 4262,\n \"text\": \"A pointer can be −\"\n },\n {\n \"code\": null,\n \"e\": 4291,\n 
\"s\": 4281,\n \"text\": \"Undefined\"\n },\n {\n \"code\": null,\n \"e\": 4302,\n \"s\": 4291,\n \"text\": \"Associated\"\n },\n {\n \"code\": null,\n \"e\": 4316,\n \"s\": 4302,\n \"text\": \"Disassociated\"\n },\n {\n \"code\": null,\n \"e\": 4479,\n \"s\": 4316,\n \"text\": \"In the above program, we have associated the pointer p1, with the target t1, using the => operator. The function associated, tests a pointer’s association status.\"\n },\n {\n \"code\": null,\n \"e\": 4540,\n \"s\": 4479,\n \"text\": \"The nullify statement disassociates a pointer from a target.\"\n },\n {\n \"code\": null,\n \"e\": 4702,\n \"s\": 4540,\n \"text\": \"Nullify does not empty the targets as there could be more than one pointer pointing to the same target. However, emptying the pointer implies nullification also.\"\n },\n {\n \"code\": null,\n \"e\": 4752,\n \"s\": 4702,\n \"text\": \"The following example demonstrates the concepts −\"\n },\n {\n \"code\": null,\n \"e\": 5302,\n \"s\": 4752,\n \"text\": \"program pointerExample\\nimplicit none\\n\\n integer, pointer :: p1\\n integer, target :: t1 \\n integer, target :: t2\\n \\n p1=>t1\\n p1 = 1\\n \\n Print *, p1\\n Print *, t1\\n \\n p1 = p1 + 4\\n Print *, p1\\n Print *, t1\\n \\n t1 = 8\\n Print *, p1\\n Print *, t1\\n \\n nullify(p1)\\n Print *, t1\\n \\n p1=>t2\\n Print *, associated(p1)\\n Print*, associated(p1, t1)\\n Print*, associated(p1, t2)\\n \\n !what is the value of p1 at present\\n Print *, p1\\n Print *, t2\\n \\n p1 = 10\\n Print *, p1\\n Print *, t2\\n \\nend program pointerExample\"\n },\n {\n \"code\": null,\n \"e\": 5383,\n \"s\": 5302,\n \"text\": \"When the above code is compiled and executed, it produces the following result −\"\n },\n {\n \"code\": null,\n \"e\": 5414,\n \"s\": 5383,\n \"text\": \"1\\n1\\n5\\n5\\n8\\n8\\n8\\nT\\nF\\nT\\n0\\n0\\n10\\n10\\n\"\n },\n {\n \"code\": null,\n \"e\": 5499,\n \"s\": 5414,\n \"text\": \"Please note that each time you run the code, the memory addresses will be 
Optimize your CPU for Deep Learning | by Param Popat | Towards Data Science

In the last few years, Deep Learning has picked up pace in academia as well as industry. Every company is now looking for an A.I.-based solution to problems. This boom has its own merits and demerits, but that's for another article, another day. This surge of Machine Learning practitioners has infiltrated academia to its roots, and almost every student from every domain has access to AI and ML knowledge via courses, MOOCs, books, articles, and of course papers.

This rise was, however, bottlenecked by the availability of hardware resources. It was suggested and demonstrated that a Graphical Processing Unit is one of the best devices you can have to perform your ML tasks at pace. But a good high-performance GPU comes with a price tag that can go up to $20,449.00 for a single NVIDIA Tesla V100 32GB GPU, which has server-like compute capabilities. Furthermore, a consumer laptop with a decent GPU such as a 1050Ti or 1080Ti costs around $2000.
To ease the pain, Google, Kaggle, Intel, and Nvidia provide cloud-based high-compute systems for free, with restrictions on space, compute capability, memory, or time. But these online services have their drawbacks, which include managing the data (upload/download), data privacy, etc. These issues lead to the main point of my article: "Why not optimize our CPUs to attain a speed-up in Deep Learning tasks?"

Intel has provided optimizations for Python, TensorFlow, PyTorch, etc., along with a whole range of Intel-optimized support libraries like NumPy, scikit-learn, and many more. These are freely available to download and set up, and they provide a speed-up of anywhere from 2x to 5x even on a CPU like the Intel Core i7, which is not a high-performance CPU like the Xeon series. In the remaining part of the article, I will demonstrate how to set up Intel's optimizations on your PC/laptop and will present the speed-up data that I observed.

For the following experiments, I will present the time and utilization boosts that I observed.

10-layer deep CNN for CIFAR-100 image classification.
3-layer deep LSTM for IMDB sentiment analysis.
6-layer deep dense ANN for MNIST image classification.
9-layer deep fully convolutional auto-encoder for MNIST.

These tasks have been coded in Keras with the TensorFlow backend, and the datasets reside on the same hard drive as the code and executable libraries.
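For reference, the per-epoch times reported below are simple wall-clock averages over the training run. A minimal, framework-agnostic sketch of such a measurement is shown here; `train_one_epoch` is a hypothetical stand-in for one pass of the actual Keras `fit` loop, not the code used in the article:

```python
import time

def average_epoch_time(train_one_epoch, epochs=50):
    """Run `train_one_epoch` repeatedly and return the mean wall-clock seconds."""
    durations = []
    for _ in range(epochs):
        start = time.perf_counter()
        train_one_epoch()  # e.g. one full pass over the training set
        durations.append(time.perf_counter() - start)
    return sum(durations) / len(durations)

# Hypothetical usage with a dummy "epoch" standing in for real training:
mean_s = average_epoch_time(lambda: sum(i * i for i in range(10_000)), epochs=5)
print(f"average time per epoch: {mean_s:.6f} s")
```

The same helper works unchanged whichever backend or optimization level is active, which is what makes the timing comparison across configurations fair.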
The hard drive utilized is an SSD.

We will consider the following six configurations:

Intel(R) Core(TM) i7.
Intel(R) Xeon(R) CPU E3-1535M v6.
Intel(R) Core(TM) i7 with Intel Python (Intel i7*).
Intel(R) Xeon(R) CPU E3-1535M v6 with Intel Python (Intel Xeon*).
Intel(R) Core(TM) i7 with Intel Python and processor thread optimization (Intel i7(O)).
Intel(R) Xeon(R) CPU E3-1535M v6 with Intel Python and processor thread optimization (Intel Xeon(O)).

For each task, the number of epochs was fixed at 50. In the chart below, we can see that for an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, the average time per epoch is nearly 4.67 seconds, and it drops to 1.48 seconds upon proper optimization, which is a 3.2x speed-up. For an Intel(R) Xeon(R) CPU E3-1535M v6 @ 3.10GHz, the average time per epoch is nearly 2.21 seconds, and it drops to 0.64 seconds upon proper optimization, which is a 3.45x speed-up.

The gain is not just in time: the optimized distribution also reduces CPU utilization, which leads to better heat management, and your laptop won't get as heated as it used to while training a deep neural network.

We can see that without any optimization, the CPU utilization while training maxes out at 100%, slowing down all the other processes and heating the system.
However, with proper optimizations, the utilization drops to 70% for the i7 and 65% for the Xeon, while still providing a performance gain in terms of time.

These two metrics can be summarized in relative terms as follows.

In the above graph, a lower value is better; that is, in relative terms, the Intel Xeon with all the optimizations stands as the benchmark, and an Intel Core i7 processor takes almost twice as much time per epoch as the Xeon, even after optimizing its usage. The graph clearly shows the bright side of Intel's Python optimizations, both in the time taken to train a neural network and in CPU usage.

Intel Software has provided an exhaustive list of resources on how to set this up, but there are some issues that we may usually face. More details about the distribution are available here. You can choose the type of installation, that is, either native pip or conda. I prefer conda, as it saves a ton of hassle and lets me focus on ML rather than on solving compatibility issues for my libraries.

You can download Anaconda from here. Their website lists all the steps to install Anaconda on Windows, Ubuntu, and macOS, and they are easy to follow.

This step is where it usually gets tricky. It is preferred to create a virtual environment for the Intel distribution so that you can always add or change your optimized libraries in one place. Let's create a new virtual environment with the name "intel":

conda create -n intel -c intel intelpython3_full

Here, -c specifies the channel, so instead of permanently adding Intel as a channel, we pull from that channel via -c. The intelpython3_full metapackage will automatically fetch the necessary libraries from Intel's distribution and install them in your virtual environment.
This command will install the following libraries.

The following NEW packages will be INSTALLED:

asn1crypto         intel/win-64::asn1crypto-0.24.0-py36_3
bzip2              intel/win-64::bzip2-1.0.6-vc14_17
certifi            intel/win-64::certifi-2018.1.18-py36_2
cffi               intel/win-64::cffi-1.11.5-py36_3
chardet            intel/win-64::chardet-3.0.4-py36_3
cryptography       intel/win-64::cryptography-2.3-py36_1
cycler             intel/win-64::cycler-0.10.0-py36_7
cython             intel/win-64::cython-0.29.3-py36_1
daal               intel/win-64::daal-2019.3-intel_203
daal4py            intel/win-64::daal4py-2019.3-py36h7b7c402_6
freetype           intel/win-64::freetype-2.9-vc14_3
funcsigs           intel/win-64::funcsigs-1.0.2-py36_7
icc_rt             intel/win-64::icc_rt-2019.3-intel_203
idna               intel/win-64::idna-2.6-py36_3
impi_rt            intel/win-64::impi_rt-2019.3-intel_203
intel-openmp       intel/win-64::intel-openmp-2019.3-intel_203
intelpython        intel/win-64::intelpython-2019.3-0
intelpython3_core  intel/win-64::intelpython3_core-2019.3-0
intelpython3_full  intel/win-64::intelpython3_full-2019.3-0
kiwisolver         intel/win-64::kiwisolver-1.0.1-py36_2
libpng             intel/win-64::libpng-1.6.36-vc14_2
llvmlite           intel/win-64::llvmlite-0.27.1-py36_0
matplotlib         intel/win-64::matplotlib-3.0.1-py36_1
menuinst           intel/win-64::menuinst-1.4.1-py36_6
mkl                intel/win-64::mkl-2019.3-intel_203
mkl-service        intel/win-64::mkl-service-1.0.0-py36_7
mkl_fft            intel/win-64::mkl_fft-1.0.11-py36h7b7c402_0
mkl_random         intel/win-64::mkl_random-1.0.2-py36h7b7c402_4
mpi4py             intel/win-64::mpi4py-3.0.0-py36_3
numba              intel/win-64::numba-0.42.1-np116py36_0
numexpr            intel/win-64::numexpr-2.6.8-py36_2
numpy              intel/win-64::numpy-1.16.1-py36h7b7c402_3
numpy-base         intel/win-64::numpy-base-1.16.1-py36_3
openssl            intel/win-64::openssl-1.0.2r-vc14_0
pandas             intel/win-64::pandas-0.24.1-py36_3
pip                intel/win-64::pip-10.0.1-py36_0
pycosat            intel/win-64::pycosat-0.6.3-py36_3
pycparser          intel/win-64::pycparser-2.18-py36_2
pyopenssl          intel/win-64::pyopenssl-17.5.0-py36_2
pyparsing          intel/win-64::pyparsing-2.2.0-py36_2
pysocks            intel/win-64::pysocks-1.6.7-py36_1
python             intel/win-64::python-3.6.8-6
python-dateutil    intel/win-64::python-dateutil-2.6.0-py36_12
pytz               intel/win-64::pytz-2018.4-py36_3
pyyaml             intel/win-64::pyyaml-4.1-py36_3
requests           intel/win-64::requests-2.20.1-py36_1
ruamel_yaml        intel/win-64::ruamel_yaml-0.11.14-py36_4
scikit-learn       intel/win-64::scikit-learn-0.20.2-py36h7b7c402_2
scipy              intel/win-64::scipy-1.2.0-py36_3
setuptools         intel/win-64::setuptools-39.0.1-py36_0
six                intel/win-64::six-1.11.0-py36_3
sqlite             intel/win-64::sqlite-3.27.2-vc14_2
tbb                intel/win-64::tbb-2019.4-vc14_intel_203
tbb4py             intel/win-64::tbb4py-2019.4-py36_intel_0
tcl                intel/win-64::tcl-8.6.4-vc14_22
tk                 intel/win-64::tk-8.6.4-vc14_28
urllib3            intel/win-64::urllib3-1.24.1-py36_2
vc                 intel/win-64::vc-14.0-2
vs2015_runtime     intel/win-64::vs2015_runtime-14.0.25420-intel_2
wheel              intel/win-64::wheel-0.31.0-py36_3
win_inet_pton      intel/win-64::win_inet_pton-1.0.1-py36_4
wincertstore       intel/win-64::wincertstore-0.2-py36_3
xz                 intel/win-64::xz-5.2.3-vc14_2
zlib               intel/win-64::zlib-1.2.11-vc14h21ff451_5

You can see that each entry is prefixed with "intel/win-64::", which signifies that the package is downloaded from Intel's distribution channel. Once you confirm the installation, the packages will be downloaded and installed.

This step is where the first issue appears. Sometimes these packages fail to download and the installer keeps cycling through the list, or we get an SSL error and the command exits. The issue may even surface later: everything downloads and installs now, but when you try to add a new library afterwards, the prompt throws SSL errors. There is an easy fix, which needs to be applied before creating the Intel virtual environment as described above.

In your shell or command prompt, turn off Anaconda's default SSL verification with the following command.

conda config --set ssl_verify false

Once SSL verification is turned off, you can repeat step 2 by deleting the previously created environment and starting fresh.

Congratulations!!
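One extra step worth adding here (my suggestion, not part of the original instructions): once the environment installs cleanly, re-enable SSL verification, since leaving it off weakens the security of every future conda download.

```shell
# Re-enable conda's SSL verification after the Intel environment is set up
conda config --set ssl_verify true
```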
Now you have set up Intel's Python distribution on your PC/laptop. It's time to enter the ML pipeline.

Intel provides optimizations for TensorFlow via all the distribution channels, and they are very smooth to set up. You can read more about it here. Let's see how to install optimized TensorFlow for our CPU. Intel Software provides an optimized Math Kernel Library (MKL), which speeds up the underlying mathematical operations. Thus, we will install tensorflow-mkl as follows.

conda install tensorflow-mkl

Or with pip, you can set it up as follows.

pip install intel-tensorflow

Voila! TensorFlow is now up and running on your system with the necessary optimizations. And if you are a Keras fan, you can set it up with a simple command:

conda install keras -c intel

Since we created a new virtual environment, it will not come with Spyder or Jupyter notebooks by default. However, these are straightforward to set up with a single line.

conda install jupyter -c intel

Now that we have set everything up, it's time to get our hands dirty and start experimenting with various ML and DL approaches on our optimized CPU system. Before executing any code, make sure you are using the right environment: the libraries installed in a virtual environment are only available after you activate it. This activation step is needed every session, and it is effortless. Type the following in your Anaconda prompt, and you're good to go.

conda activate intel

To sanity-check your environment, type the following in the command prompt/shell once the environment is activated.

python

Once you press Enter, the following text should appear. Make sure it says "Intel Corporation" between the pipes and includes the message "Intel(R) Distribution for Python is brought to you by Intel Corporation."
This validates a correct installation of Intel's Python distribution.

Python 3.6.8 |Intel Corporation| (default, Feb 27 2019, 19:55:17) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Intel(R) Distribution for Python is brought to you by Intel Corporation.
Please check out: https://software.intel.com/en-us/python-distribution

Now you can use this command line to experiment, or write your scripts elsewhere and save them with the .py extension. To run such a file, navigate to its location with the "cd" command and execute:

(intel) C:\Users\User>python script.py

By following steps 1 to 4, you will have your system at the "Intel xyz*" level shown in the performance benchmark charts above. It is still not thread-optimized for your multiple cores; below, I discuss how to achieve that further optimization for a multi-core CPU.

To add these optimizations, append the following lines of code to your .py file, and the script will execute accordingly. Here NUM_PARALLEL_EXEC_UNITS represents the number of cores you have; I have a quad-core i7, hence the number is 4.
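If you'd rather detect the core count programmatically than look it up, the standard library can report it. A small sketch (note that `multiprocessing.cpu_count()` returns logical cores, which on a hyper-threaded CPU is typically double the physical count the article uses):

```python
import multiprocessing

# Logical core count; on a hyper-threaded quad-core i7 this is usually 8,
# so halve it if you want the physical cores for NUM_PARALLEL_EXEC_UNITS.
logical_cores = multiprocessing.cpu_count()
print(logical_cores)
```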
For Windows users, you can check your core count in the Task Manager via Task Manager -> Performance -> CPU -> Cores.

import os
from keras import backend as K
import tensorflow as tf

NUM_PARALLEL_EXEC_UNITS = 4

# Set the OpenMP/MKL threading variables before TensorFlow initializes its threads.
os.environ["OMP_NUM_THREADS"] = str(NUM_PARALLEL_EXEC_UNITS)
os.environ["KMP_BLOCKTIME"] = "30"
os.environ["KMP_SETTINGS"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"

config = tf.ConfigProto(intra_op_parallelism_threads=NUM_PARALLEL_EXEC_UNITS,
                        inter_op_parallelism_threads=2,
                        allow_soft_placement=True,
                        device_count={'CPU': NUM_PARALLEL_EXEC_UNITS})
session = tf.Session(config=config)
K.set_session(session)

(Note that the original snippet omitted "import os", which is required for the os.environ calls.)

If you're not using Keras and prefer core TensorFlow, the script stays almost the same; just remove the following two lines.

from keras import backend as K
K.set_session(session)

After adding these lines to your code, the speed-up should be comparable to the "Intel xyz(O)" entries in the performance charts above.

If you have a GPU in your system and it conflicts with the current set of libraries or throws a cuDNN error, you can add the following line to your code to disable the GPU.

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

That's it. You now have an optimized pipeline to test and develop machine learning projects and ideas. This opens up a lot of opportunities for students engaged in academic research, who can carry on their work with whatever system they have. It also removes the privacy worries that come with uploading the private data a practitioner might be working on.

It is also worth noting that, with proper fine-tuning, one can obtain a 3.45x speed-up, which means that if you are experimenting with your ideas, you can now iterate roughly three times as fast as before.
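As a closing sketch (my addition, not part of the original walkthrough), you can also verify from inside a script that you are really running Intel's build, complementing the banner check from step 4. On a stock CPython the second line simply prints False:

```python
import sys

# Intel's distribution embeds "Intel Corporation" in the interpreter banner.
print(sys.version.splitlines()[0])
print("Intel build:", "Intel" in sys.version)

# Intel's NumPy is linked against MKL; the build configuration reports the
# BLAS/LAPACK backend in use (requires NumPy to be installed).
import numpy as np
np.__config__.show()
```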
Every company is now looking for an A.I. based solution to problems. This boom has its own merits and demerits, but that’s for another article, another day. This surge in Machine Learning practitioners have infiltrated the academia to its roots, and almost every student from every domain has access to AI and ML knowledge via courses, MOOCs, books, articles, and of course papers."},{"code":null,"e":1554,"s":642,"text":"This rise was, however, bottlenecked by the availability of hardware resources. It was suggested and demonstrated that a Graphical Processing Unit is one of the best devices you can have to perform your ML tasks at pace. But a good high-performance GPU came with a price tag which can go even up to $20,449.00 for a single NVIDIA Tesla V100 32GB GPU which has server-like compute capabilities. Furthermore, a consumer laptop with a decent GPU costs around $2000 with a GPU like 1050Ti or 1080Ti. To ease the pain, Google, Kaggle, Intel, and Nvidia provides cloud-based High-Compute systems for free with a restriction on either space, compute capability, memory or time. But these online services have their drawbacks which include managing the data(upload/download), data privacy, etc. These issues lead to the main point of my article, “Why not optimize our CPUs to attain a speed-up in Deep Learning tasks?”."},{"code":null,"e":2077,"s":1554,"text":"Intel has provided optimizations for Python, Tensorflow, Pytorch, etc. with a whole range of Intel Optimized support libraries like NumPy, scikit-learn and many more. These are freely available to download and set-up and provides a speed of anywhere from 2x to even 5x on a CPU like Intel Core i7 which is not also a high-performance CPU like the Xeon Series. 
In the remaining part of the article, I will demonstrate how to set-up Intel’s optimizations in your PC/laptop and will provide the speed-up data that I observed."},{"code":null,"e":2187,"s":2077,"text":"For a variety of experiments mentioned below, I will present the time and utilization boosts that I observed."},{"code":null,"e":2397,"s":2187,"text":"10-layer Deep CNN for CIFAR-100 Image Classification.3 Layer Deep LSTM for IMDB Sentiment Analysis.6 Layer deep Dense ANN for MNIST image Classification.9 Layer deep Fully Convolutional Auto-Encoder for MNIST."},{"code":null,"e":2451,"s":2397,"text":"10-layer Deep CNN for CIFAR-100 Image Classification."},{"code":null,"e":2498,"s":2451,"text":"3 Layer Deep LSTM for IMDB Sentiment Analysis."},{"code":null,"e":2553,"s":2498,"text":"6 Layer deep Dense ANN for MNIST image Classification."},{"code":null,"e":2610,"s":2553,"text":"9 Layer deep Fully Convolutional Auto-Encoder for MNIST."},{"code":null,"e":2804,"s":2610,"text":"These tasks have been coded in Keras using tensorflow backend and datasets are available in the same hard-drive as that of the codes and executable libraries. 
The hard-drive utilized is an SSD."},{"code":null,"e":2869,"s":2804,"text":"We will consider six combinations of optimizations as following."},{"code":null,"e":3231,"s":2869,"text":"Intel(R) Core (TM) i7.Intel(R) Xeon(R) CPU E3–1535M v6.Intel(R) Core (TM) i7 with Intel Python (Intel i7*).Intel(R) Xeon(R) CPU E3–1535M v6 with Intel Python (Intel Xeon*).Intel(R) Core (TM) i7 with Intel Python and Processor Thread optimization (Intel i7(O)).Intel(R) Xeon(R) CPU E3–1535M v6 with Intel Python and Processor Thread optimization (Intel Xeon(O))."},{"code":null,"e":3254,"s":3231,"text":"Intel(R) Core (TM) i7."},{"code":null,"e":3288,"s":3254,"text":"Intel(R) Xeon(R) CPU E3–1535M v6."},{"code":null,"e":3341,"s":3288,"text":"Intel(R) Core (TM) i7 with Intel Python (Intel i7*)."},{"code":null,"e":3407,"s":3341,"text":"Intel(R) Xeon(R) CPU E3–1535M v6 with Intel Python (Intel Xeon*)."},{"code":null,"e":3496,"s":3407,"text":"Intel(R) Core (TM) i7 with Intel Python and Processor Thread optimization (Intel i7(O))."},{"code":null,"e":3598,"s":3496,"text":"Intel(R) Xeon(R) CPU E3–1535M v6 with Intel Python and Processor Thread optimization (Intel Xeon(O))."},{"code":null,"e":4061,"s":3598,"text":"For each task, the number epochs were fixed at 50. In the chart below we can see that for an Intel(R) Core (TM) i7–7700HQ CPU @ 2.80GHz CPU, the average time per epoch is nearly 4.67 seconds, and it drops to 1.48 seconds upon proper optimization, which is 3.2x boost up. 
And for an Intel(R) Xeon(R) CPU E3–1535M v6 @ 3.10GHz CPU, the average time per epoch is nearly 2.21 seconds, and it drops to 0.64 seconds upon proper optimization, which is a 3.45x boost up."},{"code":null,"e":4302,"s":4061,"text":"The optimization is not just in time, the optimized distribution also optimizes the CPU utilization which eventually leads to better heat management, and your laptops won’t get as heated as they used to while training a deep neural network."},{"code":null,"e":4602,"s":4302,"text":"We can see that without any optimization the CPU utilization while training maxes out to 100%, slowing down all the other processes and heating the system. However, with proper optimizations, the utilization drops to 70% for i7 and 65% for Xeon despite providing a performance gain in terms of time."},{"code":null,"e":4668,"s":4602,"text":"These two metrics can be summarized in relative terms as follows."},{"code":null,"e":5051,"s":4668,"text":"In the above graph, a lower value is better, that is in relative terms Intel Xeon with all the optimizations stands as the benchmark, and an Intel Core i7 processor takes almost twice as time as Xeon, per epoch, after optimizing its usage. The above graph clearly shows the bright side of Intel’s Python Optimization in terms of time taken to train a neural network and CPU’s usage."},{"code":null,"e":5459,"s":5051,"text":"Intel Software has provided an exhaustive list of resources on how to set this up, but there are some issues which we may usually face. More details about distribution are available here. You can choose between the type of installation, that is, either native pip or conda. I prefer conda as it saves a ton of hassle for me and I can focus on ML rather than on solving compatibility issues for my libraries."},{"code":null,"e":5622,"s":5459,"text":"You can download Anaconda from here. 
Their website has all the steps listed to install Anaconda on windows, ubuntu and macOS environments, and are easy to follow."},{"code":null,"e":5871,"s":5622,"text":"This step is where it usually gets tricky. It is preferred to create a virtual environment for Intel distribution so that you can always add/change your optimized libraries at one place. Let’s create a new virtual environment with the name “intel.”"},{"code":null,"e":5920,"s":5871,"text":"conda create -n intel -c intel intelpython3_full"},{"code":null,"e":6210,"s":5920,"text":"Here -c represents channel, so instead of adding Intel as a channel, we call that channel via -c. Here, intelpython3_full will automatically fetch necessary libraries from Intel’s distribution and install them in your virtual environment. This command will install the following libraries."},{"code":null,"e":9775,"s":6210,"text":"The following NEW packages will be INSTALLED:asn1crypto intel/win-64::asn1crypto-0.24.0-py36_3bzip2 intel/win-64::bzip2-1.0.6-vc14_17certifi intel/win-64::certifi-2018.1.18-py36_2cffi intel/win-64::cffi-1.11.5-py36_3chardet intel/win-64::chardet-3.0.4-py36_3cryptography intel/win-64::cryptography-2.3-py36_1cycler intel/win-64::cycler-0.10.0-py36_7cython intel/win-64::cython-0.29.3-py36_1daal intel/win-64::daal-2019.3-intel_203daal4py intel/win-64::daal4py-2019.3-py36h7b7c402_6freetype intel/win-64::freetype-2.9-vc14_3funcsigs intel/win-64::funcsigs-1.0.2-py36_7icc_rt intel/win-64::icc_rt-2019.3-intel_203idna intel/win-64::idna-2.6-py36_3impi_rt intel/win-64::impi_rt-2019.3-intel_203intel-openmp intel/win-64::intel-openmp-2019.3-intel_203intelpython intel/win-64::intelpython-2019.3-0intelpython3_core intel/win-64::intelpython3_core-2019.3-0intelpython3_full intel/win-64::intelpython3_full-2019.3-0kiwisolver intel/win-64::kiwisolver-1.0.1-py36_2libpng intel/win-64::libpng-1.6.36-vc14_2llvmlite intel/win-64::llvmlite-0.27.1-py36_0matplotlib intel/win-64::matplotlib-3.0.1-py36_1menuinst 
intel/win-64::menuinst-1.4.1-py36_6mkl intel/win-64::mkl-2019.3-intel_203mkl-service intel/win-64::mkl-service-1.0.0-py36_7mkl_fft intel/win-64::mkl_fft-1.0.11-py36h7b7c402_0mkl_random intel/win-64::mkl_random-1.0.2-py36h7b7c402_4mpi4py intel/win-64::mpi4py-3.0.0-py36_3numba intel/win-64::numba-0.42.1-np116py36_0numexpr intel/win-64::numexpr-2.6.8-py36_2numpy intel/win-64::numpy-1.16.1-py36h7b7c402_3numpy-base intel/win-64::numpy-base-1.16.1-py36_3openssl intel/win-64::openssl-1.0.2r-vc14_0pandas intel/win-64::pandas-0.24.1-py36_3pip intel/win-64::pip-10.0.1-py36_0pycosat intel/win-64::pycosat-0.6.3-py36_3pycparser intel/win-64::pycparser-2.18-py36_2pyopenssl intel/win-64::pyopenssl-17.5.0-py36_2pyparsing intel/win-64::pyparsing-2.2.0-py36_2pysocks intel/win-64::pysocks-1.6.7-py36_1python intel/win-64::python-3.6.8-6python-dateutil intel/win-64::python-dateutil-2.6.0-py36_12pytz intel/win-64::pytz-2018.4-py36_3pyyaml intel/win-64::pyyaml-4.1-py36_3requests intel/win-64::requests-2.20.1-py36_1ruamel_yaml intel/win-64::ruamel_yaml-0.11.14-py36_4scikit-learn intel/win-64::scikit-learn-0.20.2-py36h7b7c402_2scipy intel/win-64::scipy-1.2.0-py36_3setuptools intel/win-64::setuptools-39.0.1-py36_0six intel/win-64::six-1.11.0-py36_3sqlite intel/win-64::sqlite-3.27.2-vc14_2tbb intel/win-64::tbb-2019.4-vc14_intel_203tbb4py intel/win-64::tbb4py-2019.4-py36_intel_0tcl intel/win-64::tcl-8.6.4-vc14_22tk intel/win-64::tk-8.6.4-vc14_28urllib3 intel/win-64::urllib3-1.24.1-py36_2vc intel/win-64::vc-14.0-2vs2015_runtime intel/win-64::vs2015_runtime-14.0.25420-intel_2wheel intel/win-64::wheel-0.31.0-py36_3win_inet_pton intel/win-64::win_inet_pton-1.0.1-py36_4wincertstore intel/win-64::wincertstore-0.2-py36_3xz intel/win-64::xz-5.2.3-vc14_2zlib intel/win-64::zlib-1.2.11-vc14h21ff451_5"},{"code":null,"e":10045,"s":9775,"text":"You can see that for each library the wheel’s description starts with “Intel/...” this signifies that the said library is being downloaded from intel’s distribution 
channel. Once you give yes to install these libraries, they will start getting downloaded and installed."},{"code":null,"e":10516,"s":10045,"text":"This step is where the first issue comes. Sometimes, these libraries don’t get downloaded, and the list propagates, or we get an SSL error and the command exits. This issue may even be delayed, that is, right now everything will get downloaded and installed, but later on if you want to add any new library, the prompt will throw SSL errors. There’s an easy fix to this problem which needs to be done before creating the virtual environment for Intel as mentioned above."},{"code":null,"e":10624,"s":10516,"text":"In your shell or command prompt, turn off the anaconda’s default SSL verification via the following command"},{"code":null,"e":10660,"s":10624,"text":"conda config --set ssl_verify false"},{"code":null,"e":10786,"s":10660,"text":"Once SLL verification is turned off, you can repeat step 2 by deleting the previously created environment and starting fresh."},{"code":null,"e":10911,"s":10786,"text":"Congratulations!! Now you have set up the Intel’s python distribution in your PC/laptop. It’s time to enter the ML pipeline."},{"code":null,"e":11320,"s":10911,"text":"Intel has provided optimization for tensorflow via all the distribution channels and is very smooth to set up. You can read more about it here. Let’s see how we can install optimized tensorflow for our CPU. Intel Software provides an optimized math kernel library(mkl) which optimizes the mathematical operations and provides the users with required speed-up. Thus, we will install tensorflow-mkl as follows."},{"code":null,"e":11349,"s":11320,"text":"conda install tensorflow-mkl"},{"code":null,"e":11392,"s":11349,"text":"Or with pip, one can set it up as follows."},{"code":null,"e":11421,"s":11392,"text":"pip install intel-tensorflow"},{"code":null,"e":11582,"s":11421,"text":"Voila!! Tensorflow is now up and running in your system with necessary optimizations. 
And if you are a Keras fan then you can set it up with a simple command: -"},{"code":null,"e":11611,"s":11582,"text":"conda install keras -c intel"},{"code":null,"e":11809,"s":11611,"text":"Since we have created a new virtual environment, it will not come with spyder or jupyter notebooks by default. However, it is straightforward to set these up. With a single line, we can do wonders."},{"code":null,"e":11840,"s":11809,"text":"conda install jupyter -c intel"},{"code":null,"e":12336,"s":11840,"text":"Now that we have set up everything, it’s time to get our hands dirty as we start coding and experimenting with various ML and DL approaches on our optimized CPU systems. Firstly, before executing any code, make sure that you are using the right environment. You need to activate the virtual environment before you can use the libraries installed in it. This activation step is an all-time process, and it is effortless. Write the following command in your anaconda prompt, and you’re good to go."},{"code":null,"e":12357,"s":12336,"text":"conda activate intel"},{"code":null,"e":12482,"s":12357,"text":"To make sanity checks on your environment, type the following in the command prompt/shell once the environment is activated."},{"code":null,"e":12489,"s":12482,"text":"python"},{"code":null,"e":12811,"s":12489,"text":"Once you press enter after typing python, the following text should appear in your command prompt. Make sure it says “Intel Corporation” between the pipe and has the message “Intel(R) Distribution for Python is brought to you by Intel Corporation.”. 
These validate the correct installation of Intel’s Python Distribution."},{"code":null,"e":13126,"s":12811,"text":"Python 3.6.8 |Intel Corporation| (default, Feb 27 2019, 19:55:17) [MSC v.1900 64 bit (AMD64)] on win32Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.Intel(R) Distribution for Python is brought to you by Intel Corporation.Please check out: https://software.intel.com/en-us/python-distribution"},{"code":null,"e":13361,"s":13126,"text":"Now you can use the command line to experiment or write your scripts elsewhere and save them with .py extension. These files can then be accessed by navigating to the location of the file via “cd” command and running the script via: -"},{"code":null,"e":13400,"s":13361,"text":"(intel) C:\\Users\\User>python script.py"},{"code":null,"e":13686,"s":13400,"text":"By following steps 1 to 4, you will have your system ready with the level of Intel xyz* as mentioned in the performance benchmark charts above. These are still not multi-processor-based thread optimized. I will discuss below how to achieve further optimization for your multi-core CPU."},{"code":null,"e":14097,"s":13686,"text":"To add further optimizations for your multi-core system, you can add the following lines of code to your .py file, and it will execute the scripts accordingly. Here NUM_PARALLEL_EXEC_UNITS represent the number of cores you have; I have a quad-core i7. Hence the number is 4. 
For Windows users, you can check the count of cores in your Task Manager via navigating to Task Manager -> Performance -> CPU -> Cores."},{"code":null,"e":14608,"s":14097,"text":"from keras import backend as Kimport tensorflow as tfNUM_PARALLEL_EXEC_UNITS = 4config = tf.ConfigProto(intra_op_parallelism_threads=NUM_PARALLEL_EXEC_UNITS, inter_op_parallelism_threads=2, allow_soft_placement=True, device_count={'CPU': NUM_PARALLEL_EXEC_UNITS})session = tf.Session(config=config)K.set_session(session)os.environ[\"OMP_NUM_THREADS\"] = \"4\"os.environ[\"KMP_BLOCKTIME\"] = \"30\"os.environ[\"KMP_SETTINGS\"] = \"1\"os.environ[\"KMP_AFFINITY\"] = \"granularity=fine,verbose,compact,1,0\""},{"code":null,"e":14744,"s":14608,"text":"If you’re not using Keras and prefer using core tensorflow, then the script remains almost the same, just remove the following 2 lines."},{"code":null,"e":14797,"s":14744,"text":"from keras import backend as KK.set_session(session)"},{"code":null,"e":14927,"s":14797,"text":"After adding these lines in your code, the speed-up should be comparable to Intel xyz(O) entries in the performance charts above."},{"code":null,"e":15111,"s":14927,"text":"If you have a GPU in your system and it is conflicting with the current set of libraries or throwing a cudnn error then you can add the following line in your code to disable the GPU."},{"code":null,"e":15153,"s":15111,"text":"os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"-1\""},{"code":null,"e":15524,"s":15153,"text":"That’s it. You have now an optimized pipeline to test and develop machine learning projects and ideas. This channel opens up a lot of opportunities for students involving themselves in academic research to carry on their work with whatever system they have. 
This pipeline will also prevent the worries of privacy of private data on which a practitioner might be working."}],"string":"[\n {\n \"code\": null,\n \"e\": 642,\n \"s\": 171,\n \"text\": \"In the last few years, Deep Learning has picked up pace in academia as well as industry. Every company is now looking for an A.I. based solution to problems. This boom has its own merits and demerits, but that’s for another article, another day. This surge in Machine Learning practitioners have infiltrated the academia to its roots, and almost every student from every domain has access to AI and ML knowledge via courses, MOOCs, books, articles, and of course papers.\"\n },\n {\n \"code\": null,\n \"e\": 1554,\n \"s\": 642,\n \"text\": \"This rise was, however, bottlenecked by the availability of hardware resources. It was suggested and demonstrated that a Graphical Processing Unit is one of the best devices you can have to perform your ML tasks at pace. But a good high-performance GPU came with a price tag which can go even up to $20,449.00 for a single NVIDIA Tesla V100 32GB GPU which has server-like compute capabilities. Furthermore, a consumer laptop with a decent GPU costs around $2000 with a GPU like 1050Ti or 1080Ti. To ease the pain, Google, Kaggle, Intel, and Nvidia provides cloud-based High-Compute systems for free with a restriction on either space, compute capability, memory or time. But these online services have their drawbacks which include managing the data(upload/download), data privacy, etc. These issues lead to the main point of my article, “Why not optimize our CPUs to attain a speed-up in Deep Learning tasks?”.\"\n },\n {\n \"code\": null,\n \"e\": 2077,\n \"s\": 1554,\n \"text\": \"Intel has provided optimizations for Python, Tensorflow, Pytorch, etc. with a whole range of Intel Optimized support libraries like NumPy, scikit-learn and many more. 
These are freely available to download and set-up and provides a speed of anywhere from 2x to even 5x on a CPU like Intel Core i7 which is not also a high-performance CPU like the Xeon Series. In the remaining part of the article, I will demonstrate how to set-up Intel’s optimizations in your PC/laptop and will provide the speed-up data that I observed.\"\n },\n {\n \"code\": null,\n \"e\": 2187,\n \"s\": 2077,\n \"text\": \"For a variety of experiments mentioned below, I will present the time and utilization boosts that I observed.\"\n },\n {\n \"code\": null,\n \"e\": 2397,\n \"s\": 2187,\n \"text\": \"10-layer Deep CNN for CIFAR-100 Image Classification.3 Layer Deep LSTM for IMDB Sentiment Analysis.6 Layer deep Dense ANN for MNIST image Classification.9 Layer deep Fully Convolutional Auto-Encoder for MNIST.\"\n },\n {\n \"code\": null,\n \"e\": 2451,\n \"s\": 2397,\n \"text\": \"10-layer Deep CNN for CIFAR-100 Image Classification.\"\n },\n {\n \"code\": null,\n \"e\": 2498,\n \"s\": 2451,\n \"text\": \"3 Layer Deep LSTM for IMDB Sentiment Analysis.\"\n },\n {\n \"code\": null,\n \"e\": 2553,\n \"s\": 2498,\n \"text\": \"6 Layer deep Dense ANN for MNIST image Classification.\"\n },\n {\n \"code\": null,\n \"e\": 2610,\n \"s\": 2553,\n \"text\": \"9 Layer deep Fully Convolutional Auto-Encoder for MNIST.\"\n },\n {\n \"code\": null,\n \"e\": 2804,\n \"s\": 2610,\n \"text\": \"These tasks have been coded in Keras using tensorflow backend and datasets are available in the same hard-drive as that of the codes and executable libraries. 
The hard-drive utilized is an SSD.\"\n },\n {\n \"code\": null,\n \"e\": 2869,\n \"s\": 2804,\n \"text\": \"We will consider six combinations of optimizations as following.\"\n },\n {\n \"code\": null,\n \"e\": 3231,\n \"s\": 2869,\n \"text\": \"Intel(R) Core (TM) i7.Intel(R) Xeon(R) CPU E3–1535M v6.Intel(R) Core (TM) i7 with Intel Python (Intel i7*).Intel(R) Xeon(R) CPU E3–1535M v6 with Intel Python (Intel Xeon*).Intel(R) Core (TM) i7 with Intel Python and Processor Thread optimization (Intel i7(O)).Intel(R) Xeon(R) CPU E3–1535M v6 with Intel Python and Processor Thread optimization (Intel Xeon(O)).\"\n },\n {\n \"code\": null,\n \"e\": 3254,\n \"s\": 3231,\n \"text\": \"Intel(R) Core (TM) i7.\"\n },\n {\n \"code\": null,\n \"e\": 3288,\n \"s\": 3254,\n \"text\": \"Intel(R) Xeon(R) CPU E3–1535M v6.\"\n },\n {\n \"code\": null,\n \"e\": 3341,\n \"s\": 3288,\n \"text\": \"Intel(R) Core (TM) i7 with Intel Python (Intel i7*).\"\n },\n {\n \"code\": null,\n \"e\": 3407,\n \"s\": 3341,\n \"text\": \"Intel(R) Xeon(R) CPU E3–1535M v6 with Intel Python (Intel Xeon*).\"\n },\n {\n \"code\": null,\n \"e\": 3496,\n \"s\": 3407,\n \"text\": \"Intel(R) Core (TM) i7 with Intel Python and Processor Thread optimization (Intel i7(O)).\"\n },\n {\n \"code\": null,\n \"e\": 3598,\n \"s\": 3496,\n \"text\": \"Intel(R) Xeon(R) CPU E3–1535M v6 with Intel Python and Processor Thread optimization (Intel Xeon(O)).\"\n },\n {\n \"code\": null,\n \"e\": 4061,\n \"s\": 3598,\n \"text\": \"For each task, the number epochs were fixed at 50. In the chart below we can see that for an Intel(R) Core (TM) i7–7700HQ CPU @ 2.80GHz CPU, the average time per epoch is nearly 4.67 seconds, and it drops to 1.48 seconds upon proper optimization, which is 3.2x boost up. 
And for an Intel(R) Xeon(R) CPU E3–1535M v6 @ 3.10GHz CPU, the average time per epoch is nearly 2.21 seconds, and it drops to 0.64 seconds upon proper optimization, which is a 3.45x boost up.\"\n },\n {\n \"code\": null,\n \"e\": 4302,\n \"s\": 4061,\n \"text\": \"The optimization is not just in time, the optimized distribution also optimizes the CPU utilization which eventually leads to better heat management, and your laptops won’t get as heated as they used to while training a deep neural network.\"\n },\n {\n \"code\": null,\n \"e\": 4602,\n \"s\": 4302,\n \"text\": \"We can see that without any optimization the CPU utilization while training maxes out to 100%, slowing down all the other processes and heating the system. However, with proper optimizations, the utilization drops to 70% for i7 and 65% for Xeon despite providing a performance gain in terms of time.\"\n },\n {\n \"code\": null,\n \"e\": 4668,\n \"s\": 4602,\n \"text\": \"These two metrics can be summarized in relative terms as follows.\"\n },\n {\n \"code\": null,\n \"e\": 5051,\n \"s\": 4668,\n \"text\": \"In the above graph, a lower value is better, that is in relative terms Intel Xeon with all the optimizations stands as the benchmark, and an Intel Core i7 processor takes almost twice as time as Xeon, per epoch, after optimizing its usage. The above graph clearly shows the bright side of Intel’s Python Optimization in terms of time taken to train a neural network and CPU’s usage.\"\n },\n {\n \"code\": null,\n \"e\": 5459,\n \"s\": 5051,\n \"text\": \"Intel Software has provided an exhaustive list of resources on how to set this up, but there are some issues which we may usually face. More details about distribution are available here. You can choose between the type of installation, that is, either native pip or conda. 
I prefer conda as it saves a ton of hassle for me and I can focus on ML rather than on solving compatibility issues for my libraries.\"\n },\n {\n \"code\": null,\n \"e\": 5622,\n \"s\": 5459,\n \"text\": \"You can download Anaconda from here. Their website has all the steps listed to install Anaconda on windows, ubuntu and macOS environments, and are easy to follow.\"\n },\n {\n \"code\": null,\n \"e\": 5871,\n \"s\": 5622,\n \"text\": \"This step is where it usually gets tricky. It is preferred to create a virtual environment for Intel distribution so that you can always add/change your optimized libraries at one place. Let’s create a new virtual environment with the name “intel.”\"\n },\n {\n \"code\": null,\n \"e\": 5920,\n \"s\": 5871,\n \"text\": \"conda create -n intel -c intel intelpython3_full\"\n },\n {\n \"code\": null,\n \"e\": 6210,\n \"s\": 5920,\n \"text\": \"Here -c represents channel, so instead of adding Intel as a channel, we call that channel via -c. Here, intelpython3_full will automatically fetch necessary libraries from Intel’s distribution and install them in your virtual environment. 
This command will install the following libraries:

The following NEW packages will be INSTALLED:

asn1crypto         intel/win-64::asn1crypto-0.24.0-py36_3
bzip2              intel/win-64::bzip2-1.0.6-vc14_17
certifi            intel/win-64::certifi-2018.1.18-py36_2
cffi               intel/win-64::cffi-1.11.5-py36_3
chardet            intel/win-64::chardet-3.0.4-py36_3
cryptography       intel/win-64::cryptography-2.3-py36_1
cycler             intel/win-64::cycler-0.10.0-py36_7
cython             intel/win-64::cython-0.29.3-py36_1
daal               intel/win-64::daal-2019.3-intel_203
daal4py            intel/win-64::daal4py-2019.3-py36h7b7c402_6
freetype           intel/win-64::freetype-2.9-vc14_3
funcsigs           intel/win-64::funcsigs-1.0.2-py36_7
icc_rt             intel/win-64::icc_rt-2019.3-intel_203
idna               intel/win-64::idna-2.6-py36_3
impi_rt            intel/win-64::impi_rt-2019.3-intel_203
intel-openmp       intel/win-64::intel-openmp-2019.3-intel_203
intelpython        intel/win-64::intelpython-2019.3-0
intelpython3_core  intel/win-64::intelpython3_core-2019.3-0
intelpython3_full  intel/win-64::intelpython3_full-2019.3-0
kiwisolver         intel/win-64::kiwisolver-1.0.1-py36_2
libpng             intel/win-64::libpng-1.6.36-vc14_2
llvmlite           intel/win-64::llvmlite-0.27.1-py36_0
matplotlib         intel/win-64::matplotlib-3.0.1-py36_1
menuinst           intel/win-64::menuinst-1.4.1-py36_6
mkl                intel/win-64::mkl-2019.3-intel_203
mkl-service        intel/win-64::mkl-service-1.0.0-py36_7
mkl_fft            intel/win-64::mkl_fft-1.0.11-py36h7b7c402_0
mkl_random         intel/win-64::mkl_random-1.0.2-py36h7b7c402_4
mpi4py             intel/win-64::mpi4py-3.0.0-py36_3
numba              intel/win-64::numba-0.42.1-np116py36_0
numexpr            intel/win-64::numexpr-2.6.8-py36_2
numpy              intel/win-64::numpy-1.16.1-py36h7b7c402_3
numpy-base         intel/win-64::numpy-base-1.16.1-py36_3
openssl            intel/win-64::openssl-1.0.2r-vc14_0
pandas             intel/win-64::pandas-0.24.1-py36_3
pip                intel/win-64::pip-10.0.1-py36_0
pycosat            intel/win-64::pycosat-0.6.3-py36_3
pycparser          intel/win-64::pycparser-2.18-py36_2
pyopenssl          intel/win-64::pyopenssl-17.5.0-py36_2
pyparsing          intel/win-64::pyparsing-2.2.0-py36_2
pysocks            intel/win-64::pysocks-1.6.7-py36_1
python             intel/win-64::python-3.6.8-6
python-dateutil    intel/win-64::python-dateutil-2.6.0-py36_12
pytz               intel/win-64::pytz-2018.4-py36_3
pyyaml             intel/win-64::pyyaml-4.1-py36_3
requests           intel/win-64::requests-2.20.1-py36_1
ruamel_yaml        intel/win-64::ruamel_yaml-0.11.14-py36_4
scikit-learn       intel/win-64::scikit-learn-0.20.2-py36h7b7c402_2
scipy              intel/win-64::scipy-1.2.0-py36_3
setuptools         intel/win-64::setuptools-39.0.1-py36_0
six                intel/win-64::six-1.11.0-py36_3
sqlite             intel/win-64::sqlite-3.27.2-vc14_2
tbb                intel/win-64::tbb-2019.4-vc14_intel_203
tbb4py             intel/win-64::tbb4py-2019.4-py36_intel_0
tcl                intel/win-64::tcl-8.6.4-vc14_22
tk                 intel/win-64::tk-8.6.4-vc14_28
urllib3            intel/win-64::urllib3-1.24.1-py36_2
vc                 intel/win-64::vc-14.0-2
vs2015_runtime     intel/win-64::vs2015_runtime-14.0.25420-intel_2
wheel              intel/win-64::wheel-0.31.0-py36_3
win_inet_pton      intel/win-64::win_inet_pton-1.0.1-py36_4
wincertstore       intel/win-64::wincertstore-0.2-py36_3
xz                 intel/win-64::xz-5.2.3-vc14_2
zlib               intel/win-64::zlib-1.2.11-vc14h21ff451_5

You can see that each package's channel description starts with "intel/...", which signifies that the library is being downloaded from Intel's distribution channel. Once you answer yes to install these libraries, they will start downloading and installing.

This step is where the first issue appears. Sometimes these libraries don't get downloaded and the list just keeps scrolling, or we get an SSL error and the command exits. The issue may even be delayed: everything downloads and installs fine now, but later, when you want to add a new library, the prompt throws SSL errors.
There's an easy fix to this problem, and it needs to be applied before creating the Intel virtual environment, as mentioned above.

In your shell or command prompt, turn off Anaconda's default SSL verification with the following command:

conda config --set ssl_verify false

Once SSL verification is turned off, you can repeat step 2 by deleting the previously created environment and starting fresh.

Congratulations! You have now set up Intel's Python distribution on your PC/laptop. It's time to enter the ML pipeline.

Intel provides optimizations for TensorFlow via all the distribution channels, and they are very smooth to set up. You can read more about this here. Let's see how we can install an optimized TensorFlow for our CPU. Intel Software provides an optimized Math Kernel Library (MKL), which speeds up the underlying mathematical operations and provides the required speed-up. We will therefore install tensorflow-mkl as follows:

conda install tensorflow-mkl

Or, with pip, one can set it up as follows:

pip install intel-tensorflow

Voila! TensorFlow is now up and running on your system with the necessary optimizations.
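Before moving on, it can be worth sanity-checking that the numeric stack is actually MKL-backed. The snippet below is a minimal sketch using NumPy's build report; the exact output varies by build, but an MKL-linked installation mentions "mkl" in its BLAS/LAPACK sections:

```python
import numpy as np

# Print NumPy's build configuration. In an MKL-backed build (such as
# Intel's distribution), the BLAS/LAPACK entries reference MKL; a stock
# build may show OpenBLAS or another backend instead.
np.show_config()
```

This only checks NumPy; TensorFlow's own MKL status is reported separately in its startup logs.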
And if you are a Keras fan, you can set it up with a single command:

conda install keras -c intel

Since we have created a new virtual environment, it will not come with Spyder or Jupyter notebooks by default. However, these are straightforward to set up. With a single line, we can do wonders:

conda install jupyter -c intel

Now that we have set up everything, it's time to get our hands dirty as we start coding and experimenting with various ML and DL approaches on our optimized CPU system. First, before executing any code, make sure you are using the right environment: you need to activate the virtual environment before you can use the libraries installed in it. You will do this every time you start a session, and it is effortless. Type the following command in your Anaconda prompt, and you're good to go:

conda activate intel

To sanity-check your environment, type the following in the command prompt/shell once the environment is activated:

python

Once you press Enter, the following text should appear in your command prompt. Make sure it says "Intel Corporation" between the pipes and includes the message "Intel(R) Distribution for Python is brought to you by Intel Corporation."
These lines confirm a correct installation of Intel's Python distribution.

Python 3.6.8 |Intel Corporation| (default, Feb 27 2019, 19:55:17) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Intel(R) Distribution for Python is brought to you by Intel Corporation.
Please check out: https://software.intel.com/en-us/python-distribution

Now you can use the command line to experiment, or write your scripts elsewhere and save them with a .py extension. You can then navigate to a file's location with the "cd" command and run the script via:

(intel) C:\Users\User>python script.py

By following steps 1 to 4, you will have your system at the "Intel xyz*" level shown in the performance benchmark charts above. This is still not thread-optimized for multiple cores; below, I discuss how to achieve further optimization for your multi-core CPU.

To add further optimizations for your multi-core system, you can add the following lines of code to your .py file, and the script will execute accordingly. Here NUM_PARALLEL_EXEC_UNITS represents the number of cores you have; I have a quad-core i7, hence the number is 4.
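If you prefer not to count cores by hand, Python can report them directly. A small sketch (note this returns logical cores, which on a Hyper-Threaded CPU can be double the physical count):

```python
import os

# Number of logical CPUs visible to the OS. On a quad-core i7 with
# Hyper-Threading this typically prints 8, so halve it if you want the
# physical core count for NUM_PARALLEL_EXEC_UNITS.
print(os.cpu_count())
```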
For Windows users, you can check your core count in Task Manager by navigating to Task Manager -> Performance -> CPU -> Cores.

import os
from keras import backend as K
import tensorflow as tf

NUM_PARALLEL_EXEC_UNITS = 4
config = tf.ConfigProto(intra_op_parallelism_threads=NUM_PARALLEL_EXEC_UNITS,
                        inter_op_parallelism_threads=2,
                        allow_soft_placement=True,
                        device_count={'CPU': NUM_PARALLEL_EXEC_UNITS})
session = tf.Session(config=config)
K.set_session(session)

os.environ["OMP_NUM_THREADS"] = "4"
os.environ["KMP_BLOCKTIME"] = "30"
os.environ["KMP_SETTINGS"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"

If you're not using Keras and prefer core TensorFlow, the script stays almost the same; just remove the following two lines:

from keras import backend as K
K.set_session(session)

After adding these lines to your code, the speed-up should be comparable to the "Intel xyz(O)" entries in the performance charts above.

If you have a GPU in your system and it conflicts with the current set of libraries or throws a cuDNN error, you can add the following line to your code to disable the GPU:

os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

That's it. You now have an optimized pipeline for testing and developing machine learning projects and ideas.
This setup opens up a lot of opportunities for students engaged in academic research to carry on their work with whatever system they have. The pipeline also eases worries about the privacy of the private data a practitioner might be working with.

Machine Learning Pipelines: Nonlinear Model Stacking | by Lester Leong | Towards Data Science

Normally, we face data sets that are fairly linear or can be manipulated into one. But what if the data set we are examining really should be looked at in a nonlinear way? Step into the world of nonlinear feature engineering. First, we'll look at examples of nonlinear data. Next, we'll briefly discuss the K-means algorithm as a means of nonlinear feature engineering. Lastly, we'll apply K-means stacked on top of logistic regression to build a superior model for classification.

Nonlinear data occurs quite often in the business world. Examples include segmenting group behavior (marketing), patterns in inventory by group activity (sales), anomaly detection from previous transactions (finance), and so on [1]. For a more concrete example (supply chain / logistics), we can even see it in a visualization of truck driver data, plotting speeding against distance [1]:

From a quick glance, we can see that there are at least two groups within this data set: a split between drivers above and below a distance of 100. Intuitively, we can see that fitting a linear model here would be horrendous. Thus, we need a different type of model. Applying K-means, we can actually find four groups, as seen below [1]:

With K-means, we can now run additional analysis on the above drivers' data set to produce predictive insights that help businesses categorize drivers' distance traveled and their speeding patterns.
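As a minimal sketch of that idea (the drivers data set itself isn't bundled here, so synthetic two-feature blobs stand in for distance and speeding percentage):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic stand-in for the two-feature drivers data (distance, speeding %)
X, _ = make_blobs(n_samples=200, centers=4, random_state=0)

# Ask K-means for four groups, mirroring the four driver segments above
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(np.bincount(km.labels_))  # size of each discovered group
```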
In our case, we'll apply K-means to our own fictitious data set, which saves us the extra feature-engineering steps that real-life data would require.

Before we begin constructing our data, let's take some time to go over what K-means actually is. K-means is an algorithm that looks for a certain number of clusters within an unlabeled data set [2]. Take note of the word unlabeled: K-means is an unsupervised learning model. This is super helpful when you get data but don't really know how to label it. K-means can help out by labeling the groups for you, which is pretty cool!

For our data, we'll use the make_circles data from sklearn [3]. Alright, let's get to our hands-on example:

#Load up our packages
import pandas as pd
import numpy as np
import sklearn
import scipy
import seaborn as sns
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder
from scipy.spatial import Voronoi, voronoi_plot_2d
from sklearn.datasets.samples_generator import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
%matplotlib notebook

Our next step is to create a K-means class. For those of you unfamiliar with classes (not the kind you take in school), think of a class in coding as a super function that holds many functions inside it. Now, I know there's already a k-means clustering algorithm in sklearn, but I really like this class written by Alice Zheng, for its detailed comments and the visualization that we'll soon see [4]:

class KMeansFeaturizer:
    """Transforms numeric data into k-means cluster memberships.

    This transformer runs k-means on the input data and converts each data
    point into the id of the closest cluster. If a target variable is present,
    it is scaled and included as input to k-means in order to derive clusters
    that obey the classification boundary as well as group similar points
    together.

    Parameters
    ----------
    k: integer, optional, default 100
        The number of clusters to group data into.
    target_scale: float, [0, infty], optional, default 5.0
        The scaling factor for the target variable. Set this to zero to ignore
        the target. For classification problems, larger `target_scale` values
        will produce clusters that better respect the class boundary.
    random_state : integer or numpy.RandomState, optional
        This is passed to k-means as the generator used to initialize the
        kmeans centers. If an integer is given, it fixes the seed. Defaults to
        the global numpy random number generator.

    Attributes
    ----------
    cluster_centers_ : array, [k, n_features]
        Coordinates of cluster centers. n_features does count the target column.
    """

    def __init__(self, k=100, target_scale=5.0, random_state=None):
        self.k = k
        self.target_scale = target_scale
        self.random_state = random_state
        self.cluster_encoder = OneHotEncoder().fit(np.array(range(k)).reshape(-1,1))

    def fit(self, X, y=None):
        """Runs k-means on the input data and find centroids.

        If no target is given (`y` is None) then run vanilla k-means on input
        `X`. If target `y` is given, then include the target (weighted by
        `target_scale`) as an extra dimension for k-means clustering. In this
        case, run k-means twice, first with the target, then an extra
        iteration without.

        After fitting, the attribute `cluster_centers_` are set to the k-means
        centroids in the input space represented by `X`.

        Parameters
        ----------
        X : array-like or sparse matrix, shape=(n_data_points, n_features)
        y : vector of length n_data_points, optional, default None
            If provided, will be weighted with `target_scale` and included in
            k-means clustering as hint.
        """
        if y is None:
            # No target variable, just do plain k-means
            km_model = KMeans(n_clusters=self.k,
                              n_init=20,
                              random_state=self.random_state)
            km_model.fit(X)
            self.km_model = km_model
            self.cluster_centers_ = km_model.cluster_centers_
            return self

        # There is target information. Apply appropriate scaling and include
        # it in the input data to k-means.
        data_with_target = np.hstack((X, y[:,np.newaxis]*self.target_scale))

        # Build a pre-training k-means model on data and target
        km_model_pretrain = KMeans(n_clusters=self.k,
                                   n_init=20,
                                   random_state=self.random_state)
        km_model_pretrain.fit(data_with_target)

        # Run k-means a second time to get the clusters in the original space
        # without target info. Initialize using centroids found in pre-training.
        # Go through a single iteration of cluster assignment and centroid
        # recomputation.
        km_model = KMeans(n_clusters=self.k,
                          init=km_model_pretrain.cluster_centers_[:,:2],
                          n_init=1,
                          max_iter=1)
        km_model.fit(X)

        self.km_model = km_model
        self.cluster_centers_ = km_model.cluster_centers_
        return self

    def transform(self, X, y=None):
        """Outputs the closest cluster id for each input data point.

        Parameters
        ----------
        X : array-like or sparse matrix, shape=(n_data_points, n_features)
        y : vector of length n_data_points, optional, default None
            Target vector is ignored even if provided.

        Returns
        -------
        cluster_ids : array, shape[n_data_points,1]
        """
        clusters = self.km_model.predict(X)
        return self.cluster_encoder.transform(clusters.reshape(-1,1))

    def fit_transform(self, X, y=None):
        """Runs fit followed by transform."""
        self.fit(X, y)
        return self.transform(X, y)

Don't let that huge amount of code bother you. I just put it there in case you want to experiment with it on your own projects.
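To see the core mechanic of the featurizer without all the scaffolding, here is a stripped-down sketch of what its transform step produces: each point is mapped to its nearest cluster id and then one-hot encoded into k indicator columns (plain sklearn, hypothetical toy data):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

rng = np.random.RandomState(0)
X = rng.randn(10, 2)  # toy 2-D data standing in for our real features

k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
encoder = OneHotEncoder().fit(np.arange(k).reshape(-1, 1))

# Each row becomes a one-hot vector marking its nearest cluster
cluster_features = encoder.transform(km.predict(X).reshape(-1, 1)).toarray()
print(cluster_features.shape)  # (10, 3)
```

These indicator columns are the new nonlinear features that get stacked next to the raw coordinates.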
Afterwards, we'll create our training/test set and set the seed to 420 so you can reproduce the same results:

# Creating our training and test set
seed = 420
training_data, training_labels = make_circles(n_samples=2000, factor=0.2)

kmf_hint = KMeansFeaturizer(k=100, target_scale=10, random_state=seed).fit(training_data, training_labels)
kmf_no_hint = KMeansFeaturizer(k=100, target_scale=0, random_state=seed).fit(training_data, training_labels)

def kmeans_voronoi_plot(X, y, cluster_centers, ax):
    #Plots Voronoi diagram of k-means clusters overlaid with data
    ax.scatter(X[:, 0], X[:, 1], c=y, cmap='Set1', alpha=0.2)
    vor = Voronoi(cluster_centers)
    voronoi_plot_2d(vor, ax=ax, show_vertices=False, alpha=0.5)

Now, let's look at our unlabeled nonlinear data:

#looking at circles data
df = pd.DataFrame(training_data)
ax = sns.scatterplot(x=0, y=1, data=df)

Just like the drivers' data set from the intro, our circle within a circle is definitely not a linear data set. Next, we'll apply K-means and compare the visual results of giving it a hint about what we expect versus no hint:

#With hint
fig = plt.figure()
ax = plt.subplot(211, aspect='equal')
kmeans_voronoi_plot(training_data, training_labels, kmf_hint.cluster_centers_, ax)
ax.set_title('K-Means with Target Hint')

#Without hint
ax2 = plt.subplot(212, aspect='equal')
kmeans_voronoi_plot(training_data, training_labels, kmf_no_hint.cluster_centers_, ax2)
ax2.set_title('K-Means without Target Hint')

I find that the hint and no-hint results are fairly close. If you want more automation, you might go without the hint. But if you can spend some time looking at your data set to give it a hint, I would: it can save running time, since k-means spends less time figuring things out on its own. Another reason to give k-means a hint is when you have domain expertise in your data set and know there is a specific number of clusters.

Time for the fun part: making the stacked model.
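The stacking step below relies on concatenating the raw coordinates with the one-hot cluster columns. As a tiny illustration of that concatenation (hypothetical arrays, not the actual training data):

```python
import numpy as np
import scipy.sparse

raw = np.array([[0.1, 0.2], [0.3, 0.4]])             # 2 points, 2 raw features
clusters = scipy.sparse.csr_matrix([[1, 0], [0, 1]])  # one-hot cluster ids

# hstack glues the dense raw features and sparse cluster indicators side by side
stacked = scipy.sparse.hstack((raw, clusters))
print(stacked.shape)  # (2, 4): raw features plus cluster indicators
```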
Some of you might be asking: what's the difference between a stacked model and an ensemble model? An ensemble model combines multiple machine learning models to make another model [5]. So, not much. I think model stacking is the more precise term here, since k-means feeds into logistic regression. If we could draw a Venn diagram, we would find stacked models inside the broader concept of ensemble models. I couldn't find a good example on Google Images, so I applied the magic of MS Paint to present a rough illustration for your viewing pleasure:

Ok, art class is over and it's back to coding. We're going to plot ROC curves for kNN, logistic regression (LR), and k-means feeding into logistic regression.

#Generate test data from the same distribution as the training data
test_data, test_labels = make_circles(n_samples=2000, factor=0.2, noise=0.3, random_state=seed+5)

training_cluster_features = kmf_hint.transform(training_data)
test_cluster_features = kmf_hint.transform(test_data)
training_with_cluster = scipy.sparse.hstack((training_data, training_cluster_features))
test_with_cluster = scipy.sparse.hstack((test_data, test_cluster_features))

#Run the models
lr_cluster = LogisticRegression(random_state=seed).fit(training_with_cluster, training_labels)

classifier_names = ['LR', 'kNN']
classifiers = [LogisticRegression(random_state=seed), KNeighborsClassifier(5)]
for model in classifiers:
    model.fit(training_data, training_labels)

#Plot the ROC
from sklearn.metrics import roc_curve

def test_roc(model, data, labels):
    if hasattr(model, "decision_function"):
        predictions = model.decision_function(data)
    else:
        predictions = model.predict_proba(data)[:,1]
    fpr, tpr, _ = roc_curve(labels, predictions)
    return fpr, tpr

plt.figure()
fpr_cluster, tpr_cluster = test_roc(lr_cluster, test_with_cluster, test_labels)
plt.plot(fpr_cluster, tpr_cluster, 'r-', label='LR with k-means')

for i, model in enumerate(classifiers):
    fpr, tpr = test_roc(model, test_data, test_labels)
    plt.plot(fpr, tpr, label=classifier_names[i])

plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
plt.xlabel('False Positive Rate', fontsize=14)
plt.ylabel('True Positive Rate', fontsize=14)

Alright, the first time I saw a ROC curve, I wondered: how do I read this thing? What you want is the model whose curve shoots up to the top-left corner the fastest. In this case, our most accurate model is the stacked one: logistic regression with k-means. The classification task our models worked on is deciding whether each data point belongs to the big circle or the small circle.

Phew, we covered quite a few things here. First, we looked at nonlinear data and examples we might face in the real world. Second, we looked at k-means as a tool to discover features in our data that were not there before. Next, we applied k-means to our own data set. Lastly, we stacked k-means into logistic regression to make a superior model. Pretty cool stuff overall. Some things to note: we didn't tune the models, which would change the performance, nor did we compare very many models. But combining unsupervised learning into your supervised models can prove quite useful and help you deliver insights you couldn't get otherwise!

Disclaimer: All things stated in this article are of my own opinion and not of any employer. Also sprinkled affiliate links.

[1] A. Trevino, Introduction to K-means Clustering (2016), https://www.datascience.com/blog/k-means-clustering

[2] J. VanderPlas, Python Data Science Handbook: Essential Tools for Working with Data (2016), https://amzn.to/2SMdZue

[3] Scikit-learn Developers, sklearn.datasets.make_circles (2019), https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_circles.html#sklearn.datasets.make_circles

[4] A. Zheng et al., Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists (2018), https://amzn.to/2SOFh3q

[5] F. Gunes, Why do stacked ensemble models win data science competitions? (2017), https://blogs.sas.com/content/subconsciousmusings/2017/05/18/stacked-ensemble-models-win-data-science-competitions/
K-means is an algorithm that looks for a certain number of clusters within an unlabeled data set [2]. Take note of the word unlabeled. This means that K-means is an unsupervised learning model. This is super helpful, when you get data but don’t really know how to label it. K-means can help out by labeling groups for you — pretty cool!"},{"code":null,"e":2241,"s":2133,"text":"For our data, we’ll use the make_circles data from sklearn [3]. Alright, let’s get to our hands on example:"},{"code":null,"e":2690,"s":2241,"text":"#Load up our packagesimport pandas as pdimport numpy as npimport sklearnimport scipyimport seaborn as snsfrom sklearn.cluster import KMeansfrom sklearn.preprocessing import OneHotEncoderfrom scipy.spatial import Voronoi, voronoi_plot_2dfrom sklearn.data sets.samples_generator import make_circlesfrom sklearn.linear_model import LogisticRegressionfrom sklearn.neighbors import KNeighborsClassifierimport matplotlib.pyplot as plt%matplotlib notebook"},{"code":null,"e":3093,"s":2690,"text":"Our next step is to use create a K-means class. For those of you unfamiliar with classes (not a subject you take in school), think of a class in coding as a super function that has a lot of functions inside it. Now, I know there’s already a k-means clustering algorithm in sklearn, but I really like this class made by Alice Zheng due to detailed comments and the visualization that we’ll soon see [4]:"},{"code":null,"e":7567,"s":3093,"text":"class KMeansFeaturizer: \"\"\"Transforms numeric data into k-means cluster memberships. This transformer runs k-means on the input data and converts each data point into the id of the closest cluster. If a target variable is present, it is scaled and included as input to k-means in order to derive clusters that obey the classification boundary as well as group similar points together. Parameters ---------- k: integer, optional, default 100 The number of clusters to group data into. 
target_scale: float, [0, infty], optional, default 5.0 The scaling factor for the target variable. Set this to zero to ignore the target. For classification problems, larger `target_scale` values will produce clusters that better respect the class boundary. random_state : integer or numpy.RandomState, optional This is passed to k-means as the generator used to initialize the kmeans centers. If an integer is given, it fixes the seed. Defaults to the global numpy random number generator. Attributes ---------- cluster_centers_ : array, [k, n_features] Coordinates of cluster centers. n_features does count the target column. \"\"\" def __init__(self, k=100, target_scale=5.0, random_state=None): self.k = k self.target_scale = target_scale self.random_state = random_state self.cluster_encoder = OneHotEncoder().fit(np.array(range(k)).reshape(-1,1)) def fit(self, X, y=None): \"\"\"Runs k-means on the input data and find centroids. If no target is given (`y` is None) then run vanilla k-means on input `X`. If target `y` is given, then include the target (weighted by `target_scale`) as an extra dimension for k-means clustering. In this case, run k-means twice, first with the target, then an extra iteration without. After fitting, the attribute `cluster_centers_` are set to the k-means centroids in the input space represented by `X`. Parameters ---------- X : array-like or sparse matrix, shape=(n_data_points, n_features) y : vector of length n_data_points, optional, default None If provided, will be weighted with `target_scale` and included in k-means clustering as hint. \"\"\" if y is None: # No target variable, just do plain k-means km_model = KMeans(n_clusters=self.k, n_init=20, random_state=self.random_state) km_model.fit(X) self.km_model_ = km_model self.cluster_centers_ = km_model.cluster_centers_ return self # There is target information. 
Apply appropriate scaling and include # into input data to k-means data_with_target = np.hstack((X, y[:,np.newaxis]*self.target_scale)) # Build a pre-training k-means model on data and target km_model_pretrain = KMeans(n_clusters=self.k, n_init=20, random_state=self.random_state) km_model_pretrain.fit(data_with_target) # Run k-means a second time to get the clusters in the original space # without target info. Initialize using centroids found in pre-training. # Go through a single iteration of cluster assignment and centroid # recomputation. km_model = KMeans(n_clusters=self.k, init=km_model_pretrain.cluster_centers_[:,:2], n_init=1, max_iter=1) km_model.fit(X) self.km_model = km_model self.cluster_centers_ = km_model.cluster_centers_ return self def transform(self, X, y=None): \"\"\"Outputs the closest cluster id for each input data point. Parameters ---------- X : array-like or sparse matrix, shape=(n_data_points, n_features) y : vector of length n_data_points, optional, default None Target vector is ignored even if provided. Returns ------- cluster_ids : array, shape[n_data_points,1] \"\"\" clusters = self.km_model.predict(X) return self.cluster_encoder.transform(clusters.reshape(-1,1)) def fit_transform(self, X, y=None): \"\"\"Runs fit followed by transform. \"\"\" self.fit(X, y) return self.transform(X, y)"},{"code":null,"e":7793,"s":7567,"text":"Don’t let that huge amount of text bother you. I just put it there incase you wanted to experiment with it on your own projects. 
Afterwards, we’ll create our training/test set, and set the seed to 420 to get the same results:"},{"code":null,"e":8401,"s":7793,"text":"# Creating our training and test setseed = 420training_data, training_labels = make_circles(n_samples=2000, factor=0.2)kmf_hint = KMeansFeaturizer(k=100, target_scale=10, random_state=seed).fit(training_data, training_labels)kmf_no_hint = KMeansFeaturizer(k=100, target_scale=0, random_state=seed).fit(training_data, training_labels)def kmeans_voronoi_plot(X, y, cluster_centers, ax): #Plots Voronoi diagram of k-means clusters overlaid with data ax.scatter(X[:, 0], X[:, 1], c=y, cmap='Set1', alpha=0.2) vor = Voronoi(cluster_centers) voronoi_plot_2d(vor, ax=ax, show_vertices=False, alpha=0.5)"},{"code":null,"e":8450,"s":8401,"text":"Now, let’s look at our unlabeled nonlinear data:"},{"code":null,"e":8546,"s":8450,"text":"#looking at circles datadf = pd.DataFrame(training_data)ax = sns.scatterplot(x=0, y=1, data=df)"},{"code":null,"e":8759,"s":8546,"text":"Just like our into data set of the drivers’, our circle within a circle is definitely not a linear data set. Next, we’ll apply K-means comparing visual results with giving it a hint on what we think and no hints:"},{"code":null,"e":9128,"s":8759,"text":"#With hintfig = plt.figure()ax = plt.subplot(211, aspect='equal')kmeans_voronoi_plot(training_data, training_labels, kmf_hint.cluster_centers_, ax)ax.set_title('K-Means with Target Hint')#Without hintax2 = plt.subplot(212, aspect='equal')kmeans_voronoi_plot(training_data, training_labels, kmf_no_hint.cluster_centers_, ax2)ax2.set_title('K-Means without Target Hint')"},{"code":null,"e":9600,"s":9128,"text":"I find that in hint versus no hint that the results are fairly close. If you want more automation, then you might want to apply no hint. But if you can spend some time looking at your data set to give it a hint, I would. 
The reason is that it could save you some running time, since k-means spends less effort figuring things out on its own. Another reason to give k-means a hint is when you have domain expertise in your data set and know there is a specific number of clusters.

Time for the fun part: making the stacked model. Some of you might be asking, what’s the difference between a stacked model and an ensemble model? An ensemble model combines multiple machine learning models to make another model [5]. So, not much. I think model stacking is more precise here, since k-means is feeding into logistic regression. If we could draw a Venn diagram, we would find stacked models inside the concept of ensemble models. I couldn’t find a good example on Google Images, so I applied the magic of MS Paint to present a rough illustration for your viewing pleasure:

Ok, art class over and back to coding. We’re going to plot a ROC curve for kNN, logistic regression (LR), and k-means feeding into logistic regression.

# Generate test data from same distribution of training data
test_data, test_labels = make_circles(n_samples=2000, factor=0.2,
                                      random_state=seed + 5)
training_cluster_features = kmf_hint.transform(training_data)
test_cluster_features = kmf_hint.transform(test_data)
training_with_cluster = scipy.sparse.hstack((training_data,
                                             training_cluster_features))
test_with_cluster = scipy.sparse.hstack((test_data, test_cluster_features))

# Run the models
lr_cluster = LogisticRegression(random_state=seed).fit(training_with_cluster,
                                                       training_labels)
classifier_names = ['LR', 'kNN']
classifiers = [LogisticRegression(random_state=seed),
               KNeighborsClassifier(5)]
for model in classifiers:
    model.fit(training_data, training_labels)

# Plot the ROC
def test_roc(model, data, labels):
    if hasattr(model, "decision_function"):
        predictions = model.decision_function(data)
    else:
        predictions = 
model.predict_proba(data)[:, 1]
    fpr, tpr, _ = sklearn.metrics.roc_curve(labels, predictions)
    return fpr, tpr

plt.figure()
fpr_cluster, tpr_cluster = test_roc(lr_cluster, test_with_cluster, test_labels)
plt.plot(fpr_cluster, tpr_cluster, 'r-', label='LR with k-means')

for i, model in enumerate(classifiers):
    fpr, tpr = test_roc(model, test_data, test_labels)
    plt.plot(fpr, tpr, label=classifier_names[i])

plt.plot([0, 1], [0, 1], 'k--')
plt.legend()
plt.xlabel('False Positive Rate', fontsize=14)
plt.ylabel('True Positive Rate', fontsize=14)

Alright, the first time I saw a ROC curve, I wondered how to read the thing. What you want is the model whose curve shoots up toward the top-left corner the fastest. In this case, our most accurate model is the stacked model: logistic regression with k-means. The classification task our models worked on is deciding whether each data point belongs to the big circle or the small circle.

Phew, we covered quite a few things here. First, we looked at nonlinear data and examples that we might face in the real world. Second, we looked at k-means as a tool to discover features about our data that were not there before. Next, we applied k-means to our own data set. Lastly, we stacked k-means into logistic regression to make a superior model. Pretty cool stuff overall. Some things to note: we didn’t tune the models, which would change the performance, nor did we compare that many models. But combining unsupervised learning into your supervised models can prove pretty useful and help you deliver insights you couldn’t get otherwise!

Disclaimer: All things stated in this article are of my own opinion and not of any employer.
Affiliate links are also sprinkled in.

[1] A. Trevino, Introduction to K-means Clustering (2016), https://www.datascience.com/blog/k-means-clustering

[2] J. VanderPlas, Python Data Science Handbook: Essential Tools for Working with Data (2016), https://amzn.to/2SMdZue

[3] Scikit-learn Developers, sklearn.datasets.make_circles (2019), https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_circles.html#sklearn.datasets.make_circles

[4] A. Zheng et al., Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists (2018), https://amzn.to/2SOFh3q
Arrays and Strings in C++ - GeeksforGeeks

07 May, 2020

Arrays

An array in C or C++ is a collection of items stored at contiguous memory locations, and its elements can be accessed randomly using array indices. Arrays are used to store elements of similar type: the data type must be the same for all elements. They can hold collections of primitive data types such as int, float, double, char, etc., and an array in C or C++ can also store derived data types such as structures, pointers, etc. There are two types of arrays:

One Dimensional Array
Multi Dimensional Array

One Dimensional Array: A one-dimensional array is a collection of elements of the same data type.
A 1-D array is declared as:

data_type variable_name[size]

data_type is the type of the array elements, like int, float, char, etc.
variable_name is the name of the array.
size is the length of the array, which is fixed.

Note: The memory occupied by the array elements depends upon the data type we use.

Below is the program to illustrate the traversal of the array:

// C++ program to illustrate the traversal
// of the array
#include <iostream>
using namespace std;

// Function to illustrate traversal of arr[]
void traverseArray(int arr[], int N)
{
    // Iterate over [0, N-1] and print
    // the element at each index
    for (int i = 0; i < N; i++) {
        cout << arr[i] << ' ';
    }
}

// Driver Code
int main()
{
    // Given array
    int arr[] = { 1, 2, 3, 4 };

    // Size of the array
    int N = sizeof(arr) / sizeof(arr[0]);

    // Function call
    traverseArray(arr, N);
}

Output:
1 2 3 4

MultiDimensional Array: A multidimensional array is also known as an array of arrays. Generally, we use a two-dimensional array, also known as a matrix. We use two indices to traverse the rows and columns of the 2D array.
It is declared as:

data_type variable_name[N][M]

data_type is the type of the array elements, like int, float, char, etc.
variable_name is the name of the array.
N is the number of rows.
M is the number of columns.

Below is the program to illustrate the traversal of the 2D array:

// C++ program to illustrate the traversal
// of the 2D array
#include <iostream>
using namespace std;

const int N = 2;
const int M = 2;

// Function to illustrate traversal of arr[][]
void traverse2DArray(int arr[][M], int N)
{
    // Iterate over the rows and columns and
    // print the element at each index
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < M; j++) {
            cout << arr[i][j] << ' ';
        }
        cout << endl;
    }
}

// Driver Code
int main()
{
    // Given array
    int arr[][M] = { { 1, 2 }, { 3, 4 } };

    // Function call
    traverse2DArray(arr, N);
    return 0;
}

Output:
1 2
3 4

Strings

The C++ string class internally uses a character array to store characters, but all memory management, allocation, and null termination are handled by the string class itself, which is why it is easy to use. A C-style string, by contrast, is a plain character array, declared for example as:

char str[] = "GeeksforGeeks"

Below is the program to illustrate traversal of such a string:

// C++ program to illustrate the
// traversal of a string
#include <iostream>
#include <cstdio>
using namespace std;

// Function to illustrate traversal
// of a string
void traverseString(char str[])
{
    int i = 0;

    // Iterate until we find '\0'
    while (str[i] != '\0') {
        printf("%c ", str[i]);
        i++;
    }
}

// Driver Code
int main()
{
    // Given string
    char str[] = "GeekforGeeks";

    // Function call
    traverseString(str);
    return 0;
}

Output:
G e e k f o r G e e k s

The <cstring> header in C++ provides various string-manipulation functions.
They are:

strcpy(): It is used to copy characters from one string to another string.
strcat(): It is used to concatenate the two given strings.
strlen(): It is used to find the length of the given string.
strcmp(): It is used to compare the two given strings.

Below is the program to illustrate the above functions:

// C++ program to illustrate functions
// for string manipulation
#include <iostream>
#include <cstring>
using namespace std;

// Driver Code
int main()
{
    // Given two strings
    char str1[100] = "GeekforGeeks";
    char str2[100] = "HelloGeek";

    // To get the length of the string,
    // use the strlen() function
    int x = strlen(str1);
    cout << "Length of string " << str1
         << " is " << x << endl;
    cout << endl;

    // To compare the two strings str1
    // and str2, use the strcmp() function
    int result = strcmp(str1, str2);

    // If result is 0, then str1 and str2
    // are equal
    if (result == 0) {
        cout << "String " << str1
             << " and String " << str2
             << " are equal." << endl;
    }
    else {
        cout << "String " << str1
             << " and String " << str2
             << " are not equal." << endl;
    }
    cout << endl;

    cout << "String str1 before: " << str1 << endl;

    // Use strcpy() to copy characters
    // from one string to another
    strcpy(str1, str2);
    cout << "String str1 after: " << str1 << endl;

    return 0;
}

Output:
Length of string GeekforGeeks is 12

String GeekforGeeks and String HelloGeek are not equal.

String str1 before: GeekforGeeks
String str1 after: HelloGeek
Technique\nFind duplicates in O(n) time and O(1) extra space | Set 1\nVector in C++ STL\nInheritance in C++\nIterators in C++ STL\nInitialize a vector in C++ (6 different ways)\nSocket Programming in C/C++"},"parsed":{"kind":"list like","value":[{"code":null,"e":24740,"s":24712,"text":"\n07 May, 2020"},{"code":null,"e":24747,"s":24740,"text":"Arrays"},{"code":null,"e":25260,"s":24747,"text":"An array in C or C++ is a collection of items stored at contiguous memory locations and elements can be accessed randomly using indices of an array. They are used to store similar types of elements as in the data type must be the same for all elements. They can be used to store the collection of primitive data types such as int, float, double, char, etc of any particular type. To add to it, an array in C or C++ can store derived data types such as the structures, pointers, etc.There are two types of arrays:"},{"code":null,"e":25282,"s":25260,"text":"One Dimensional Array"},{"code":null,"e":25306,"s":25282,"text":"Multi Dimensional Array"},{"code":null,"e":25415,"s":25306,"text":"One Dimensional Array: A one dimensional array is a collection of same data types. 
1-D array is declared as:"},{"code":null,"e":25595,"s":25415,"text":"data_type variable_name[size]\n\ndata_type is the type of array, like int, float, char, etc.\nvariable_name is the name of the array.\nsize is the length of the array which is fixed.\n"},{"code":null,"e":25671,"s":25595,"text":"Note: The location of the array elements depends upon the data type we use."},{"code":null,"e":25711,"s":25671,"text":"Below is the illustration of the array:"},{"code":null,"e":25774,"s":25711,"text":"Below is the program to illustrate the traversal of the array:"},{"code":"// C++ program to illustrate the traversal// of the array#include \"iostream\"using namespace std; // Function to illustrate traversal in arr[]void traverseArray(int arr[], int N){ // Iterate from [1, N-1] and print // the element at that index for (int i = 0; i < N; i++) { cout << arr[i] << ' '; }} // Driver Codeint main(){ // Given array int arr[] = { 1, 2, 3, 4 }; // Size of the array int N = sizeof(arr) / sizeof(arr[0]); // Function call traverseArray(arr, N);}","e":26289,"s":25774,"text":null},{"code":null,"e":26298,"s":26289,"text":"1 2 3 4\n"},{"code":null,"e":26544,"s":26298,"text":"MultiDimensional Array: A multidimensional array is also known as array of arrays. Generally, we use a two-dimensional array. It is also known as the matrix. We use two indices to traverse the rows and columns of the 2D array. 
It is declared as:"},{"code":null,"e":26729,"s":26544,"text":"data_type variable_name[N][M]\n\ndata_type is the type of array, like int, float, char, etc.\nvariable_name is the name of the array.\nN is the number of rows.\nM is the number of columns.\n"},{"code":null,"e":26795,"s":26729,"text":"Below is the program to illustrate the traversal of the 2D array:"},{"code":"// C++ program to illustrate the traversal// of the 2D array#include \"iostream\"using namespace std; const int N = 2;const int M = 2; // Function to illustrate traversal in arr[][]void traverse2DArray(int arr[][M], int N){ // Iterate from [1, N-1] and print // the element at that index for (int i = 0; i < N; i++) { for (int j = 0; j < M; j++) { cout << arr[i][j] << ' '; } cout << endl; }} // Driver Codeint main(){ // Given array int arr[][M] = { { 1, 2 }, { 3, 4 } }; // Function call traverse2DArray(arr, N); return 0;}","e":27389,"s":26795,"text":null},{"code":null,"e":27399,"s":27389,"text":"1 2 \n3 4\n"},{"code":null,"e":27407,"s":27399,"text":"Strings"},{"code":null,"e":27632,"s":27407,"text":"C++ string class internally uses character array to store character but all memory management, allocation, and null termination are handled by string class itself that is why it is easy to use. 
For example it is declared as:"},{"code":null,"e":27662,"s":27632,"text":"char str[] = \"GeeksforGeeks\"\n"},{"code":null,"e":27726,"s":27662,"text":"Below is the program to illustrate the traversal in the string:"},{"code":"// C++ program to illustrate the// traversal of string#include \"iostream\"using namespace std; // Function to illustrate traversal// in stringvoid traverseString(char str[]){ int i = 0; // Iterate till we found '\\0' while (str[i] != '\\0') { printf(\"%c \", str[i]); i++; }} // Driver Codeint main(){ // Given string char str[] = \"GeekforGeeks\"; // Function call traverseString(str); return 0;}","e":28168,"s":27726,"text":null},{"code":null,"e":28193,"s":28168,"text":"G e e k f o r G e e k s\n"},{"code":null,"e":28286,"s":28193,"text":"The string data_type in C++ provides various functionality of string manipulation. They are:"},{"code":null,"e":28524,"s":28286,"text":"strcpy(): It is used to copy characters from one string to another string.strcat(): It is used to add the two given strings.strlen(): It is used to find the length of the given string.strcmp(): It is used to compare the two given string."},{"code":null,"e":28599,"s":28524,"text":"strcpy(): It is used to copy characters from one string to another string."},{"code":null,"e":28650,"s":28599,"text":"strcat(): It is used to add the two given strings."},{"code":null,"e":28711,"s":28650,"text":"strlen(): It is used to find the length of the given string."},{"code":null,"e":28765,"s":28711,"text":"strcmp(): It is used to compare the two given string."},{"code":null,"e":28821,"s":28765,"text":"Below is the program to illustrate the above functions:"},{"code":"// C++ program to illustrate functions// of string manipulation#include \"iostream\"#include \"string.h\"using namespace std; // Driver Codeint main(){ // Given two string char str1[100] = \"GeekforGeeks\"; char str2[100] = \"HelloGeek\"; // To get the length of the string // use strlen() function int x = strlen(str1); cout << 
\"Length of string \" << str1 << \" is \" << x << endl; cout << endl; // To compare the two string str1 // and str2 use strcmp() function int result = strcmp(str1, str2); // If result is 0 then str1 and str2 // are equals if (result == 0) { cout << \"String \" << str1 << \" and String \" << str2 << \" are equal.\" << endl; } else { cout << \"String \" << str1 << \" and String \" << str2 << \" are not equal.\" << endl; } cout << endl; cout << \"String str1 before: \" << str1 << endl; // Use strcpy() to copy character // from one string to another strcpy(str1, str2); cout << \"String str1 after: \" << str1 << endl; cout << endl; return 0;}","e":29956,"s":28821,"text":null},{"code":null,"e":30113,"s":29956,"text":"Length of string GeekforGeeks is 12\n\nString GeekforGeeks and String HelloGeek are not equal.\n\nString str1 before: GeekforGeeks\nString str1 after: HelloGeek\n"},{"code":null,"e":30120,"s":30113,"text":"Arrays"},{"code":null,"e":30124,"s":30120,"text":"C++"},{"code":null,"e":30132,"s":30124,"text":"Strings"},{"code":null,"e":30139,"s":30132,"text":"Arrays"},{"code":null,"e":30147,"s":30139,"text":"Strings"},{"code":null,"e":30151,"s":30147,"text":"CPP"},{"code":null,"e":30249,"s":30151,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":30258,"s":30249,"text":"Comments"},{"code":null,"e":30271,"s":30258,"text":"Old Comments"},{"code":null,"e":30291,"s":30271,"text":"Trapping Rain Water"},{"code":null,"e":30340,"s":30291,"text":"Program to find sum of elements in a given array"},{"code":null,"e":30378,"s":30340,"text":"Reversal algorithm for array rotation"},{"code":null,"e":30403,"s":30378,"text":"Window Sliding Technique"},{"code":null,"e":30461,"s":30403,"text":"Find duplicates in O(n) time and O(1) extra space | Set 1"},{"code":null,"e":30479,"s":30461,"text":"Vector in C++ STL"},{"code":null,"e":30498,"s":30479,"text":"Inheritance in 
C++"}]}}},{"rowIdx":556,"cells":{"title":{"kind":"string","value":"How to get radiobutton output in Tkinter?"},"text":{"kind":"string","value":"The radiobutton widget in Tkinter allows the user to make a selection for only one option from a set of given choices. The radiobutton has only two values, either True or False.
If we want to get the output to check which option the user has selected, then we can use the get() method. It returns the object that is defined as the variable. We can display the selection in a label widget by casting the integer value in a string object and pass it in the text attributes.
# Import the required libraries
from tkinter import *
from tkinter import ttk

# Create an instance of tkinter frame or window
win = Tk()

# Set the size of the window
win.geometry("700x350")

# Define a function to get the output for selected option
def selection():
   selected = "You selected the option " + str(radio.get())
   label.config(text=selected)

radio = IntVar()
Label(text="Your Favourite programming language:", font=('Aerial 11')).pack()

# Define radiobutton for each options
r1 = Radiobutton(win, text="C++", variable=radio, value=1, command=selection)

r1.pack(anchor=N)
r2 = Radiobutton(win, text="Python", variable=radio, value=2, command=selection)

r2.pack(anchor=N)
r3 = Radiobutton(win, text="Java", variable=radio, value=3, command=selection)

r3.pack(anchor=N)

# Define a label widget
label = Label(win)
label.pack()

win.mainloop()
Executing the above code will display a window with a set of radiobuttons in it. 
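Since radio.get() returns only the integer stored in the IntVar, a small lookup keyed by the same value= codes can translate it into the option's name. This is an illustrative helper; the OPTIONS table and the selection_text name are assumptions, not part of the script above:

```python
# Hypothetical lookup for the value=1/2/3 codes used by the
# radiobuttons above; not part of the original script.
OPTIONS = {1: "C++", 2: "Python", 3: "Java"}

def selection_text(value):
    # Build the same message as the selection() callback, with the
    # option's name appended.
    return "You selected the option " + str(value) + " (" + OPTIONS.get(value, "?") + ")"
```

The selection() callback could then call label.config(text=selection_text(radio.get())) to show the language name instead of just the number.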
Click any option and it will show the option that you have selected."},"parsed":{"kind":"list like","value":[{"code":null,"e":1240,"s":1062,"text":"The radiobutton widget in Tkinter allows the user to make a selection for only one option from a set of given choices. The radiobutton has only two values, either True or False."},{"code":null,"e":1534,"s":1240,"text":"If we want to get the output to check which option the user has selected, then we can use the get() method. It returns the object that is defined as the variable. We can display the selection in a label widget by casting the integer value in a string object and pass it in the text attributes."},{"code":null,"e":2396,"s":1534,"text":"# Import the required libraries\nfrom tkinter import *\nfrom tkinter import ttk\n\n# Create an instance of tkinter frame or window\nwin = Tk()\n\n# Set the size of the window\nwin.geometry(\"700x350\")\n\n# Define a function to get the output for selected option\ndef selection():\n selected = \"You selected the option \" + str(radio.get())\n label.config(text=selected)\n\nradio = IntVar()\nLabel(text=\"Your Favourite programming language:\", font=('Aerial 11')).pack()\n\n# Define radiobutton for each options\nr1 = Radiobutton(win, text=\"C++\", variable=radio, value=1, command=selection)\n\nr1.pack(anchor=N)\nr2 = Radiobutton(win, text=\"Python\", variable=radio, value=2, command=selection)\n\nr2.pack(anchor=N)\nr3 = Radiobutton(win, text=\"Java\", variable=radio, value=3, command=selection)\n\nr3.pack(anchor=N)\n\n# Define a label widget\nlabel = Label(win)\nlabel.pack()\n\nwin.mainloop()"},{"code":null,"e":2546,"s":2396,"text":"Executing the above code will display a window with a set of radiobuttons in it. Click any option and it will show the option that you have selected."}],"string":"[\n {\n \"code\": null,\n \"e\": 1240,\n \"s\": 1062,\n \"text\": \"The radiobutton widget in Tkinter allows the user to make a selection for only one option from a set of given choices. 
The radiobutton has only two values, either True or False.\"\n }\n]"}}},{"rowIdx":557,"cells":{"title":{"kind":"string","value":"CSS | stroke-linejoin Property - GeeksforGeeks"},"text":{"kind":"string","value":"22 Nov, 2019
The stroke-linejoin property is an inbuilt property used to define the shape that is used to end an open sub-path of a stroke.
Syntax:
stroke-linejoin: miter | miter-clip | round | bevel | arcs | initial | inherit
Property Values:
miter: It is used to indicate that a sharp corner would be used to join the two ends. The outer edges of the stroke are extended to the tangents of the path segments until they intersect. This gives the ending a sharp corner.Example: CSS | stroke-linejoin property
<!DOCTYPE html>
<html>
<head>
    <title>CSS | stroke-linejoin property</title>
    <style>
        polyline {
            fill: none;
            stroke: green;
            stroke-width: 20;
            stroke-linejoin: miter;
        }
    </style>
</head>
<body>
    <h1 style="color: green;">GeeksforGeeks</h1>
    <h3>CSS | stroke-linejoin: miter;</h3>
    <svg width="300" height="160">
        <polyline points="40 130, 150 40, 260 130" />
    </svg>
</body>
</html>
\nOutput:\nmiter-clip: It is used to indicate that a sharp corner would be used to join the two ends. The outer edges of the stroke are extended to the tangents of the path segments until they intersect.It gives the ending a sharp corner like the miter value except another property. The stroke-miterlimit is used to determine whether the miter would be clipped if it exceeds a certain value. It is used to provide a better-looking miter on very sharp joins or animations.Example: CSS | stroke-linejoin property
<!DOCTYPE html>
<html>
<head>
    <title>CSS | stroke-linejoin property</title>
    <style>
        polyline {
            fill: none;
            stroke: green;
            stroke-width: 20;
            stroke-linejoin: miter-clip;
        }
    </style>
</head>
<body>
    <h1 style="color: green;">GeeksforGeeks</h1>
    <h3>CSS | stroke-linejoin: miter-clip;</h3>
    <svg width="300" height="160">
        <polyline points="40 130, 150 40, 260 130" />
    </svg>
</body>
</html>
\nOutput:\nround: It is used to indicate that rounded the corner would be used to join the two ends.Example: CSS | stroke-linejoin property
<!DOCTYPE html>
<html>
<head>
    <title>CSS | stroke-linejoin property</title>
    <style>
        polyline {
            fill: none;
            stroke: green;
            stroke-width: 20;
            stroke-linejoin: round;
        }
    </style>
</head>
<body>
    <h1 style="color: green;">GeeksforGeeks</h1>
    <h3>CSS | stroke-linejoin: round;</h3>
    <svg width="300" height="160">
        <polyline points="40 130, 150 40, 260 130" />
    </svg>
</body>
</html>
\nOutput:\nbevel: It is used to indicate that the connecting point is cropped perpendicular to the joint.Example: CSS | stroke-linejoin property
<!DOCTYPE html>
<html>
<head>
    <title>CSS | stroke-linejoin property</title>
    <style>
        polyline {
            fill: none;
            stroke: green;
            stroke-width: 20;
            stroke-linejoin: bevel;
        }
    </style>
</head>
<body>
    <h1 style="color: green;">GeeksforGeeks</h1>
    <h3>CSS | stroke-linejoin: bevel;</h3>
    <svg width="300" height="160">
        <polyline points="40 130, 150 40, 260 130" />
    </svg>
</body>
</html>
\nOutput:\narcs: It is used to indicate that an arcs corner is to be used to join path segments. This shape is formed by the extension of the outer edges of the stroke having the same curvature as the outer edges at the point they join.\ninitial: It is used to set the property to its default value.Example: CSS | stroke-linejoin
<!DOCTYPE html>
<html>
<head>
    <title>CSS | stroke-linejoin</title>
    <style>
        polyline {
            fill: none;
            stroke: green;
            stroke-width: 20;
            stroke-linejoin: initial;
        }
    </style>
</head>
<body>
    <h1 style="color: green;">GeeksforGeeks</h1>
    <h3>CSS | stroke-linejoin: initial;</h3>
    <svg width="300" height="160">
        <polyline points="40 130, 150 40, 260 130" />
    </svg>
</body>
</html>
\nOutput:\ninherit: It is used to set the property to inherit from its parent.\nSupported Browsers: The browser supported by stroke-linejoin property are listed below:\nChrome\nInternet Explorer 9\nFirefox\nSafari\nOpera\nNote: The stroke-linejoin: arcs; is not supported by any major browsers.\nCSS-Properties\nCSS\nWeb Technologies\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nDesign a web page using HTML and CSS\nForm validation using jQuery\nHow to set space between the flexbox ?\nSearch Bar using HTML, CSS and JavaScript\nHow to Create Time-Table schedule using HTML ?\nRoadmap to Become a Web Developer in 2022\nInstallation of Node.js on Linux\nHow to fetch data from an API in ReactJS ?\nConvert a string to an integer in JavaScript\nDifference between var, let and const keywords in JavaScript"},"parsed":{"kind":"list like","value":[{"code":null,"e":25376,"s":25348,"text":"\n22 Nov, 2019"},{"code":null,"e":25503,"s":25376,"text":"The stroke-linejoin property is an inbuilt property used to define the shape that is used to end an open sub-path of a stroke."},{"code":null,"e":25511,"s":25503,"text":"Syntax:"},{"code":null,"e":25590,"s":25511,"text":"stroke-linejoin: miter | miter-clip | round | bevel | arcs | initial | inherit"},{"code":null,"e":25607,"s":25590,"text":"Property Values:"},{"code":null,"e":26647,"s":25607,"text":"miter: It is used to indicate that a sharp corner would be used to join the two ends. The outer edges of the stroke are extended to the tangents of the path segments until they intersect. This gives the ending a sharp corner.Example: CSS | stroke-linejoin property
Output:"}]}}},{"rowIdx":558,"cells":{"title":{"kind":"string","value":"Train a Hand Detector using Tensorflow 2 Object Detection API in 2021 | by Aalok Patwardhan | Towards Data Science"},"text":{"kind":"string","value":"I wanted to make a computer vision application that could detect my hands in real-time. There were so many articles online that used the Tensorflow object detection API and Google Colab, but I still struggled a lot to actually get things working. The reason? Versions of libraries and code change!
Here is an example of using this detector in a virtual theremin:
This article should guide you on what works right now (March 2021). I will assume you know basic Python skills, and enough know-how to look up what you don’t know from other tutorials! 😂
Things we will be using:
Google Colab
Tensorflow Object Detection API 2
The Egohands dataset: http://vision.soic.indiana.edu/projects/egohands/
Steps:
1. Set up environment
2. Download and Install Tensorflow 2 Object Detection API
3. Download dataset, generate tf_records
4. Download model and edit config
5. Train model and export to savedmodel format
Acknowledgements: Big thanks to github users molyswu, datitran, and gilberttanner from whom I have taken some code and modified it slightly. Please do check out their tutorials as well.
Open up a new Google Colab notebook, and mount your Google drive. 
You don’t need to, but it’s super handy in case you disconnect from your session, or just want to come back to it again.
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/MyDrive
We are now inside your Google Drive. (You may need to change that last %cd in case your drive mounts to a slightly different path).
Google Colab will be using Tensorflow 2, but just in case, explicitly do this:
%tensorflow_version 2.x
The first thing is to download and install Tensorflow 2 Object Detection API. The simplest way is to first go into your root directory and then clone from git:
%cd /content/drive/MyDrive
!git clone https://github.com/tensorflow/models.git
Then, compile the protos — there is no output by the way. (protoc should work as it is, because Google Colab already has it installed):
%cd /content/drive/MyDrive/models/research
!protoc object_detection/protos/*.proto --python_out=.
Now install the actual API:
!cp object_detection/packages/tf2/setup.py .
!python -m pip install .
Test your tensorflow object detection API 2 installation! Everything should be “OK”. (It’s ok if some of the tests are automatically skipped).
#TEST IF YOU WANT
!python object_detection/builders/model_builder_tf2_test.py
I’ll describe at a top level what you need to do here, as this part hasn’t really changed much from other tutorials. 
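At its core, this step boils down to a shuffled train/test split of image files plus CSV files of box annotations. Here is a minimal stdlib sketch of just the split logic (the file names and the 80/20 ratio are my own illustration, not taken from the actual Egohands script):

```python
import random

def split_dataset(filenames, train_fraction=0.8, seed=42):
    """Shuffle the file names and split them into train/test lists."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = sorted(filenames)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# 4 folders x 100 frames = 400 images, as used later in this article
images = [f"frame_{i:04d}.jpg" for i in range(400)]
train, test = split_dataset(images)
print(len(train), len(test))  # → 320 80
```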
In summary, we are going to download the Egohands dataset but only use a subset of the many, many images there, since we are doing transfer learning. We will split them into a train directory and a test directory, and generate .xml files for each of them (containing the bounding box annotations for each image).
I have created a general script which:
Downloads the whole Egohands dataset and extracts it
Only keeps a small number (4) of folders from it
Splits the images into a train and test set
Creates annotation .csv files with bounding box coordinates
First we need to make sure you are in your root directory and then clone my git repo.
%cd /content/drive/MyDrive
!git clone https://github.com/aalpatya/detect_hands.git
From my downloaded repo, copy the egohands_dataset_to_csv.py file into your root location and run it. This will do everything for you — by default it will only take 4 random folders (so 400 images) from the actual Egohands dataset, split them into a train and test set, and generate .csv files.
!cp detect_hands/egohands_dataset_to_csv.py .
!python egohands_dataset_to_csv.py
Big shoutout to https://github.com/molyswu/hand_detection, from whom I got the original script. I’ve just tidied it up a bit and made a few tweaks. 
😁
The test_labels.csv and train_labels.csv files that were just created contain the bounding box locations for every image, but surprise surprise, Tensorflow needs that information in a different format, a tf_record.
We will create the required files train.record and test.record by using generate_tfrecord.py from my git repo (I modified this from the wonderful datitran’s tutorial at https://github.com/datitran/raccoon_dataset).
%cd /content/drive/MyDrive
!cp detect_hands/generate_tfrecord.py .
# For the train dataset
!python generate_tfrecord.py --csv_input=images/train/train_labels.csv --output_path=train.record
# For the test dataset
!python generate_tfrecord.py --csv_input=images/test/test_labels.csv --output_path=test.record
Your directory should now look like this:
/content/drive/MyDrive (or whatever your root is called)
|__ egohands
|__ detect_hands
|__ images
    |__ train
        |__ train.csv
    |__ test
        |__ test.csv
|__ train.record
|__ test.record
Get the download link of a model you want from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
Here I am using SSD Mobilenet V2 fpnlite 320x320. I found that some models did not work with tensorflow 2, so use this one if you want to be sure.
%cd /content/drive/MyDrive
!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz
# Unzip
!tar -xzvf ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz
First, create a file called label_map.pbtxt which contains your hand class. It should look like this:
item {
  id: 1
  name: 'hand'
}
Or, you can just note down the path to the one already in my git repo which you should already have: /content/drive/MyDrive/detect_hands/model_data/ssd_mobilenet_v2_fpn_320/label_map.pbtxt
Next we will edit the pipeline.config that came with the downloaded tensorflow model. It will be inside the model directory of the model you downloaded from the tensorflow model zoo. 
For example: ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/pipeline.config
Near the start of pipeline.config:
Change the number of classes to 1:
Towards the middle/end of pipeline.config:
Set the path to the model checkpoint. We only want the beginning part of the checkpoint name until the number. For example: “ckpt-0”, and not “ckpt-0.index”.
Set the checkpoint type as “detection”
You may also want to change the batch size. In general the lower the batch size the quicker your model loss will drop, but it will take longer to settle at a loss value. I chose a batch size of 4 because I just wanted training to happen faster, I’m not looking for state of the art accuracy here. Play around with this number, and read this article.
At the end of pipeline.config:
Set the path to label_map.pbtxt (there are two places to do this, one for testing and one for training)
Set the path to the train.record and test.record files
First we’re going to load up the tensorboard, so that once training begins we can visualise the progress in nice graphs.
The logdir argument is the path to the log directory that your training process will create. In our case this will be called output_training, and the logs automatically get stored in output_training/train.
%load_ext tensorboard
%tensorboard --logdir=/content/drive/MyDrive/output_training/train
Now begin training, setting up the right paths to our pipeline config file, as well as the path to the output_training directory (which hasn’t been created yet).
%cd /content/drive/MyDrive/models/research/object_detection/
#train
!python model_main_tf2.py \
--pipeline_config_path=/content/drive/MyDrive/detect_hands/model_data/ssd_mobilenet_v2_fpn_320/pipeline.config \
--model_dir=/content/drive/MyDrive/output_training --alsologtostderr
This will begin the process of training, and you just sit back and wait. 
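For a rough sense of scale while you wait: the number of gradient steps needed for one full pass over the training set is just the image count divided by the batch size, rounded up. A quick back-of-the-envelope sketch (the 400-image figure comes from the 4 Egohands folders used earlier; your actual train split will be somewhat smaller):

```python
import math

def steps_per_pass(num_train_images: int, batch_size: int) -> int:
    # Each optimizer step consumes one batch of images.
    return math.ceil(num_train_images / batch_size)

print(steps_per_pass(400, 4))   # batch size 4, as chosen above → 100
print(steps_per_pass(400, 16))  # a larger batch means fewer steps per pass → 25
```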
Either you wait a long time until the training process finishes, or just cancel the process after some time (perhaps you see on the loss graph that the loss is levelling off). It’s ok to do that, because the training process keeps saving model checkpoints.
Now we will export the training output into a savedmodel format so that we can use it for inference.
%cd /content/drive/MyDrive/models/research/object_detection
!python exporter_main_v2.py \
--trained_checkpoint_dir=/content/drive/MyDrive/output_training \
--pipeline_config_path=/content/drive/MyDrive/detect_hands/model_data/ssd_mobilenet_v2_fpn_320/pipeline.config \
--output_directory /content/drive/MyDrive/inference
The important bit of this whole thing is the inference folder. It is the only thing we actually need if we want to perform inference.
CONGRATULATIONS! You have trained a hand detector! 🎈🎉🎊
The model can be loaded with tensorflow 2 as
detect_fn = tf.saved_model.load(PATH_TO_SAVED_MODEL)
and from there you can use the detect_fn function and go ahead with inference, but I’ll leave that for another tutorial 😉
😂"},{"code":null,"e":746,"s":721,"text":"Things we will be using:"},{"code":null,"e":759,"s":746,"text":"Google Colab"},{"code":null,"e":793,"s":759,"text":"Tensorflow Object Detection API 2"},{"code":null,"e":865,"s":793,"text":"The Egohands dataset: http://vision.soic.indiana.edu/projects/egohands/"},{"code":null,"e":1069,"s":865,"text":"Steps:1. Set up environment2. Download and Install Tensorflow 2 Object Detection API3. Download dataset, generate tf_records4. Download model and edit config5. Train model and export to savedmodel format"},{"code":null,"e":1254,"s":1069,"text":"Acknowledgements:Big thanks to github users molyswu, datitran, and gilberttanner from whom I have taken some code and modified it slightly. Please do check out their tutorials as well."},{"code":null,"e":1440,"s":1254,"text":"Open up a new Google Colab notebook, and mount your Google drive. You don’t need to, but it’s super handy incase you disconnect from your session, or just want to come back to it again."},{"code":null,"e":1526,"s":1440,"text":"from google.colab import drivedrive.mount('/content/drive')%cd /content/drive/MyDrive"},{"code":null,"e":1657,"s":1526,"text":"We are now inside your Google Drive. 
(You may need to change that last %cd incase your drive mounts to a slightly different path)."},{"code":null,"e":1736,"s":1657,"text":"Google Colab will be using Tensorflow 2, but just in case, explicitly do this:"},{"code":null,"e":1760,"s":1736,"text":"%tensorflow_version 2.x"},{"code":null,"e":1919,"s":1760,"text":"The first thing is to download and install Tensorflow 2 Object Detection API The simplest way is to first go into your root directory and then clone from git:"},{"code":null,"e":1997,"s":1919,"text":"%cd /content/drive/MyDrive!git clone https://github.com/tensorflow/models.git"},{"code":null,"e":2132,"s":1997,"text":"Then, compile the protos — there is no output by the way.(protoc should work as it is, because Google Colab already has it installed):"},{"code":null,"e":2229,"s":2132,"text":"%cd /content/drive/MyDrive/models/research!protoc object_detection/protos/*.proto --python_out=."},{"code":null,"e":2257,"s":2229,"text":"Now install the actual API:"},{"code":null,"e":2327,"s":2257,"text":"!cp object_detection/packages/tf2/setup.py . !python -m pip install ."},{"code":null,"e":2470,"s":2327,"text":"Test your tensorflow object detection API 2 installation! Everything should be “OK”. (It’s ok if some of the tests are automatically skipped)."},{"code":null,"e":2547,"s":2470,"text":"#TEST IF YOU WANT!python object_detection/builders/model_builder_tf2_test.py"},{"code":null,"e":2975,"s":2547,"text":"I’ll describe at a top level what you need to do here, as this part hasn’t really changed much from other tutorials. 
In summary, we are going to download the Egohands dataset but only use a subset of the many many images there, since we are doing transfer learning.We will split them into a train directory and a test directory, and generate .xml files for each of them (containing the bounding box annotations for each image)."},{"code":null,"e":3014,"s":2975,"text":"I have created a general script which:"},{"code":null,"e":3067,"s":3014,"text":"Downloads the whole Egohands dataset and extracts it"},{"code":null,"e":3116,"s":3067,"text":"Only keeps a small number (4) of folders from it"},{"code":null,"e":3160,"s":3116,"text":"Splits the images into a train and test set"},{"code":null,"e":3220,"s":3160,"text":"Creates annotation .csv files with bounding box coordinates"},{"code":null,"e":3306,"s":3220,"text":"First we need to make sure you are in your root directory and then clone my git repo."},{"code":null,"e":3388,"s":3306,"text":"%cd /content/drive/MyDrive!git clone https://github.com/aalpatya/detect_hands.git"},{"code":null,"e":3683,"s":3388,"text":"From my downloaded repo, copy the egohands_dataset_to_csv.py file into your root location and run it. This will do everything for you — by default it will only take 4 random folders (so 400 images) from the actual Egohands dataset, split them into a train and test set, and generate .csv files."},{"code":null,"e":3763,"s":3683,"text":"!cp detect_hands/egohands_dataset_to_csv.py .!python egohands_dataset_to_csv.py"},{"code":null,"e":3913,"s":3763,"text":"Big shoutout to https://github.com/molyswu/hand_detection, from whom I got the original script. I’ve just tidied it up a bit and made a few tweaks. 
😁"},{"code":null,"e":4128,"s":3913,"text":"The test_labels.csv and train_labels.csv files that were just created contain the bounding box locations for every image, but surprise surprise, Tensorflow needs that information in a different format, a tf_record."},{"code":null,"e":4344,"s":4128,"text":"We will create the required files train.record and test.record by using generate_tfrecord.py from my git repo, (I modified this from the wonderful datitran’s tutorial at https://github.com/datitran/raccoon_dataset)."},{"code":null,"e":4648,"s":4344,"text":"%cd /content/drive/MyDrive!cp detect_hands/generate_tfrecord.py .# For the train dataset!python generate_tfrecord.py --csv_input=images/train/train_labels.csv --output_path=train.record# For the test dataset!python generate_tfrecord.py --csv_input=images/test/test_labels.csv --output_path=test.record"},{"code":null,"e":4898,"s":4648,"text":"/content/drive/MyDrive (or whatever your root is called) |__ egohands |__ detect_hands |__ images |__ train |__ |__ train.csv |__ test |__ |__ test.csv |__ train.record |__ test.record"},{"code":null,"e":5047,"s":4898,"text":"Get the download link of a model you want from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md"},{"code":null,"e":5194,"s":5047,"text":"Here I am using SSD Mobilenet V2 fpnlite 320x320. I found that some models did not work with tensorflow 2, so use this one if you want to be sure."},{"code":null,"e":5417,"s":5194,"text":"%cd /content/drive/MyDrive!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz# Unzip!tar -xzvf ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz"},{"code":null,"e":5519,"s":5417,"text":"First, create a file called label_map.pbtxt which contains your hand class. 
It should look like this:"},{"code":null,"e":5548,"s":5519,"text":"item { id: 1 name: 'hand'}"},{"code":null,"e":5737,"s":5548,"text":"Or, you can just note down the path to the one already in my git repo which you should already have: /content/drive/MyDrive/detect_hands/model_data/ssd_mobilenet_v2_fpn_320/label_map.pbtxt"},{"code":null,"e":5995,"s":5737,"text":"Next we will edit the pipeline.config that came with the downloaded tensorflow model. It will be inside the model directory of the model you downloaded from the tensorflow model zoo. For example: ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/pipeline.config"},{"code":null,"e":6030,"s":5995,"text":"Near the start of pipeline.config:"},{"code":null,"e":6065,"s":6030,"text":"Change the number of classes to 1:"},{"code":null,"e":6108,"s":6065,"text":"Towards the middle/end of pipeline.config:"},{"code":null,"e":6265,"s":6108,"text":"Set the path to the model checkpoint. We only want the beginnig part of the checkpoint name until the number. For example: “ckpt-0”, and not “ckpt-0.index”."},{"code":null,"e":6304,"s":6265,"text":"Set the checkpoint type as “detection”"},{"code":null,"e":6654,"s":6304,"text":"You may also want to change the batch size. In general the lower the batch size the quicker your model loss will drop, but it will take longer to settle at a loss value. I chose a batch size of 4 because I just wanted training to happen faster, I’m not looking for state of the art accuracy here. 
Play around with this number, and read this article."},{"code":null,"e":6685,"s":6654,"text":"At the end of pipeline.config:"},{"code":null,"e":6789,"s":6685,"text":"Set the path to label_map.pbtxt (there are two places to do this, one for testing and one for training)"},{"code":null,"e":6844,"s":6789,"text":"Set the path to the train.record and test.record files"},{"code":null,"e":6965,"s":6844,"text":"First we’re going to load up the tensorboard, so that once training begins we can visualise the progress in nice graphs."},{"code":null,"e":7171,"s":6965,"text":"The logdir argument is the path to the log directory that your training process will create. In our case this will be called output_training, and the logs automatically get stored in output_training/train."},{"code":null,"e":7259,"s":7171,"text":"%load_ext tensorboard%tensorboard --logdir=/content/drive/MyDrive/output_training/train"},{"code":null,"e":7421,"s":7259,"text":"Now begin training, setting up the right paths to our pipeline config file, as well as the path to the output_training directory (which hasn’t been created yet)."},{"code":null,"e":7696,"s":7421,"text":"%cd /content/drive/MyDrive/models/research/object_detection/#train !python model_main_tf2.py \\--pipeline_config_path=/content/drive/MyDrive/detect_hands/model_data/ssd_mobilenet_v2_fpn_320/pipeline.config \\--model_dir=/content/drive/MyDrive/output_training --alsologtostderr"},{"code":null,"e":8026,"s":7696,"text":"This will begin the process of training, and you just sit back and wait. Either you wait a long time until the training process finishes, or just cancel the process after some time (perhaps you see on the loss graph that the loss is levelling off). 
It’s ok to do that, because the training process keeps saving model checkpoints."},{"code":null,"e":8127,"s":8026,"text":"Now we will export the training output into a savedmodel format so that we can use it for inference."},{"code":null,"e":8444,"s":8127,"text":"%cd /content/drive/MyDrive/models/research/object_detection!python exporter_main_v2.py \\--trained_checkpoint_dir=/content/drive/MyDrive/output_training \\--pipeline_config_path=/content/drive/MyDrive/detect_hands/model_data/ssd_mobilenet_v2_fpn_320/pipeline.config \\--output_directory /content/drive/MyDrive/inference"},{"code":null,"e":8578,"s":8444,"text":"The important bit of this whole thing is the inference folder. It is the only thing we actually need if we want to perform inference."},{"code":null,"e":8633,"s":8578,"text":"CONGRATULATIONS! You have trained a hand detector! 🎈🎉🎊"},{"code":null,"e":8678,"s":8633,"text":"The model can be loaded with tensorflow 2 as"},{"code":null,"e":8731,"s":8678,"text":"detect_fn = tf.saved_model.load(PATH_TO_SAVED_MODEL)"}],"string":"[\n {\n \"code\": null,\n \"e\": 469,\n \"s\": 171,\n \"text\": \"I wanted to make a computer vision application that could detect my hands in real-time. There were so many articles online that used the Tensorflow object detection API and Google Colab, but I still struggled a lot to actually get things working. The reason? Versions of libraries and code change!\"\n },\n {\n \"code\": null,\n \"e\": 534,\n \"s\": 469,\n \"text\": \"Here is an example of using this detector in a virtual theremin:\"\n },\n {\n \"code\": null,\n \"e\": 721,\n \"s\": 534,\n \"text\": \"This article should guide you on what works right now (March 2021). I will assume you know basic Python skills, and enough know-how to look up what you don’t know from other tutorials! 
😂\"\n },\n {\n \"code\": null,\n \"e\": 746,\n \"s\": 721,\n \"text\": \"Things we will be using:\"\n },\n {\n \"code\": null,\n \"e\": 759,\n \"s\": 746,\n \"text\": \"Google Colab\"\n },\n {\n \"code\": null,\n \"e\": 793,\n \"s\": 759,\n \"text\": \"Tensorflow Object Detection API 2\"\n },\n {\n \"code\": null,\n \"e\": 865,\n \"s\": 793,\n \"text\": \"The Egohands dataset: http://vision.soic.indiana.edu/projects/egohands/\"\n },\n {\n \"code\": null,\n \"e\": 1069,\n \"s\": 865,\n \"text\": \"Steps:1. Set up environment2. Download and Install Tensorflow 2 Object Detection API3. Download dataset, generate tf_records4. Download model and edit config5. Train model and export to savedmodel format\"\n },\n {\n \"code\": null,\n \"e\": 1254,\n \"s\": 1069,\n \"text\": \"Acknowledgements:Big thanks to github users molyswu, datitran, and gilberttanner from whom I have taken some code and modified it slightly. Please do check out their tutorials as well.\"\n },\n {\n \"code\": null,\n \"e\": 1440,\n \"s\": 1254,\n \"text\": \"Open up a new Google Colab notebook, and mount your Google drive. You don’t need to, but it’s super handy incase you disconnect from your session, or just want to come back to it again.\"\n },\n {\n \"code\": null,\n \"e\": 1526,\n \"s\": 1440,\n \"text\": \"from google.colab import drivedrive.mount('/content/drive')%cd /content/drive/MyDrive\"\n },\n {\n \"code\": null,\n \"e\": 1657,\n \"s\": 1526,\n \"text\": \"We are now inside your Google Drive. 
(You may need to change that last %cd incase your drive mounts to a slightly different path).\"\n },\n {\n \"code\": null,\n \"e\": 1736,\n \"s\": 1657,\n \"text\": \"Google Colab will be using Tensorflow 2, but just in case, explicitly do this:\"\n },\n {\n \"code\": null,\n \"e\": 1760,\n \"s\": 1736,\n \"text\": \"%tensorflow_version 2.x\"\n },\n {\n \"code\": null,\n \"e\": 1919,\n \"s\": 1760,\n \"text\": \"The first thing is to download and install Tensorflow 2 Object Detection API The simplest way is to first go into your root directory and then clone from git:\"\n },\n {\n \"code\": null,\n \"e\": 1997,\n \"s\": 1919,\n \"text\": \"%cd /content/drive/MyDrive!git clone https://github.com/tensorflow/models.git\"\n },\n {\n \"code\": null,\n \"e\": 2132,\n \"s\": 1997,\n \"text\": \"Then, compile the protos — there is no output by the way.(protoc should work as it is, because Google Colab already has it installed):\"\n },\n {\n \"code\": null,\n \"e\": 2229,\n \"s\": 2132,\n \"text\": \"%cd /content/drive/MyDrive/models/research!protoc object_detection/protos/*.proto --python_out=.\"\n },\n {\n \"code\": null,\n \"e\": 2257,\n \"s\": 2229,\n \"text\": \"Now install the actual API:\"\n },\n {\n \"code\": null,\n \"e\": 2327,\n \"s\": 2257,\n \"text\": \"!cp object_detection/packages/tf2/setup.py . !python -m pip install .\"\n },\n {\n \"code\": null,\n \"e\": 2470,\n \"s\": 2327,\n \"text\": \"Test your tensorflow object detection API 2 installation! Everything should be “OK”. (It’s ok if some of the tests are automatically skipped).\"\n },\n {\n \"code\": null,\n \"e\": 2547,\n \"s\": 2470,\n \"text\": \"#TEST IF YOU WANT!python object_detection/builders/model_builder_tf2_test.py\"\n },\n {\n \"code\": null,\n \"e\": 2975,\n \"s\": 2547,\n \"text\": \"I’ll describe at a top level what you need to do here, as this part hasn’t really changed much from other tutorials. 
In summary, we are going to download the Egohands dataset but only use a subset of the many many images there, since we are doing transfer learning.We will split them into a train directory and a test directory, and generate .xml files for each of them (containing the bounding box annotations for each image).\"\n },\n {\n \"code\": null,\n \"e\": 3014,\n \"s\": 2975,\n \"text\": \"I have created a general script which:\"\n },\n {\n \"code\": null,\n \"e\": 3067,\n \"s\": 3014,\n \"text\": \"Downloads the whole Egohands dataset and extracts it\"\n },\n {\n \"code\": null,\n \"e\": 3116,\n \"s\": 3067,\n \"text\": \"Only keeps a small number (4) of folders from it\"\n },\n {\n \"code\": null,\n \"e\": 3160,\n \"s\": 3116,\n \"text\": \"Splits the images into a train and test set\"\n },\n {\n \"code\": null,\n \"e\": 3220,\n \"s\": 3160,\n \"text\": \"Creates annotation .csv files with bounding box coordinates\"\n },\n {\n \"code\": null,\n \"e\": 3306,\n \"s\": 3220,\n \"text\": \"First we need to make sure you are in your root directory and then clone my git repo.\"\n },\n {\n \"code\": null,\n \"e\": 3388,\n \"s\": 3306,\n \"text\": \"%cd /content/drive/MyDrive!git clone https://github.com/aalpatya/detect_hands.git\"\n },\n {\n \"code\": null,\n \"e\": 3683,\n \"s\": 3388,\n \"text\": \"From my downloaded repo, copy the egohands_dataset_to_csv.py file into your root location and run it. This will do everything for you — by default it will only take 4 random folders (so 400 images) from the actual Egohands dataset, split them into a train and test set, and generate .csv files.\"\n },\n {\n \"code\": null,\n \"e\": 3763,\n \"s\": 3683,\n \"text\": \"!cp detect_hands/egohands_dataset_to_csv.py .!python egohands_dataset_to_csv.py\"\n },\n {\n \"code\": null,\n \"e\": 3913,\n \"s\": 3763,\n \"text\": \"Big shoutout to https://github.com/molyswu/hand_detection, from whom I got the original script. I’ve just tidied it up a bit and made a few tweaks. 
😁\"\n },\n {\n \"code\": null,\n \"e\": 4128,\n \"s\": 3913,\n \"text\": \"The test_labels.csv and train_labels.csv files that were just created contain the bounding box locations for every image, but surprise surprise, Tensorflow needs that information in a different format, a tf_record.\"\n },\n {\n \"code\": null,\n \"e\": 4344,\n \"s\": 4128,\n \"text\": \"We will create the required files train.record and test.record by using generate_tfrecord.py from my git repo, (I modified this from the wonderful datitran’s tutorial at https://github.com/datitran/raccoon_dataset).\"\n },\n {\n \"code\": null,\n \"e\": 4648,\n \"s\": 4344,\n \"text\": \"%cd /content/drive/MyDrive!cp detect_hands/generate_tfrecord.py .# For the train dataset!python generate_tfrecord.py --csv_input=images/train/train_labels.csv --output_path=train.record# For the test dataset!python generate_tfrecord.py --csv_input=images/test/test_labels.csv --output_path=test.record\"\n },\n {\n \"code\": null,\n \"e\": 4898,\n \"s\": 4648,\n \"text\": \"/content/drive/MyDrive (or whatever your root is called) |__ egohands |__ detect_hands |__ images |__ train |__ |__ train.csv |__ test |__ |__ test.csv |__ train.record |__ test.record\"\n },\n {\n \"code\": null,\n \"e\": 5047,\n \"s\": 4898,\n \"text\": \"Get the download link of a model you want from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md\"\n },\n {\n \"code\": null,\n \"e\": 5194,\n \"s\": 5047,\n \"text\": \"Here I am using SSD Mobilenet V2 fpnlite 320x320. 
I found that some models did not work with tensorflow 2, so use this one if you want to be sure.\"\n },\n {\n \"code\": null,\n \"e\": 5417,\n \"s\": 5194,\n \"text\": \"%cd /content/drive/MyDrive!wget http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz# Unzip!tar -xzvf ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.tar.gz\"\n },\n {\n \"code\": null,\n \"e\": 5519,\n \"s\": 5417,\n \"text\": \"First, create a file called label_map.pbtxt which contains your hand class. It should look like this:\"\n },\n {\n \"code\": null,\n \"e\": 5548,\n \"s\": 5519,\n \"text\": \"item { id: 1 name: 'hand'}\"\n },\n {\n \"code\": null,\n \"e\": 5737,\n \"s\": 5548,\n \"text\": \"Or, you can just note down the path to the one already in my git repo which you should already have: /content/drive/MyDrive/detect_hands/model_data/ssd_mobilenet_v2_fpn_320/label_map.pbtxt\"\n },\n {\n \"code\": null,\n \"e\": 5995,\n \"s\": 5737,\n \"text\": \"Next we will edit the pipeline.config that came with the downloaded tensorflow model. It will be inside the model directory of the model you downloaded from the tensorflow model zoo. For example: ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8/pipeline.config\"\n },\n {\n \"code\": null,\n \"e\": 6030,\n \"s\": 5995,\n \"text\": \"Near the start of pipeline.config:\"\n },\n {\n \"code\": null,\n \"e\": 6065,\n \"s\": 6030,\n \"text\": \"Change the number of classes to 1:\"\n },\n {\n \"code\": null,\n \"e\": 6108,\n \"s\": 6065,\n \"text\": \"Towards the middle/end of pipeline.config:\"\n },\n {\n \"code\": null,\n \"e\": 6265,\n \"s\": 6108,\n \"text\": \"Set the path to the model checkpoint. We only want the beginnig part of the checkpoint name until the number. 
For example: “ckpt-0”, and not “ckpt-0.index”.\"\n },\n {\n \"code\": null,\n \"e\": 6304,\n \"s\": 6265,\n \"text\": \"Set the checkpoint type as “detection”\"\n },\n {\n \"code\": null,\n \"e\": 6654,\n \"s\": 6304,\n \"text\": \"You may also want to change the batch size. In general the lower the batch size the quicker your model loss will drop, but it will take longer to settle at a loss value. I chose a batch size of 4 because I just wanted training to happen faster, I’m not looking for state of the art accuracy here. Play around with this number, and read this article.\"\n },\n {\n \"code\": null,\n \"e\": 6685,\n \"s\": 6654,\n \"text\": \"At the end of pipeline.config:\"\n },\n {\n \"code\": null,\n \"e\": 6789,\n \"s\": 6685,\n \"text\": \"Set the path to label_map.pbtxt (there are two places to do this, one for testing and one for training)\"\n },\n {\n \"code\": null,\n \"e\": 6844,\n \"s\": 6789,\n \"text\": \"Set the path to the train.record and test.record files\"\n },\n {\n \"code\": null,\n \"e\": 6965,\n \"s\": 6844,\n \"text\": \"First we’re going to load up the tensorboard, so that once training begins we can visualise the progress in nice graphs.\"\n },\n {\n \"code\": null,\n \"e\": 7171,\n \"s\": 6965,\n \"text\": \"The logdir argument is the path to the log directory that your training process will create. 
In our case this will be called output_training, and the logs automatically get stored in output_training/train.\"\n },\n {\n \"code\": null,\n \"e\": 7259,\n \"s\": 7171,\n \"text\": \"%load_ext tensorboard%tensorboard --logdir=/content/drive/MyDrive/output_training/train\"\n },\n {\n \"code\": null,\n \"e\": 7421,\n \"s\": 7259,\n \"text\": \"Now begin training, setting up the right paths to our pipeline config file, as well as the path to the output_training directory (which hasn’t been created yet).\"\n },\n {\n \"code\": null,\n \"e\": 7696,\n \"s\": 7421,\n \"text\": \"%cd /content/drive/MyDrive/models/research/object_detection/#train !python model_main_tf2.py \\\\--pipeline_config_path=/content/drive/MyDrive/detect_hands/model_data/ssd_mobilenet_v2_fpn_320/pipeline.config \\\\--model_dir=/content/drive/MyDrive/output_training --alsologtostderr\"\n },\n {\n \"code\": null,\n \"e\": 8026,\n \"s\": 7696,\n \"text\": \"This will begin the process of training, and you just sit back and wait. Either you wait a long time until the training process finishes, or just cancel the process after some time (perhaps you see on the loss graph that the loss is levelling off). It’s ok to do that, because the training process keeps saving model checkpoints.\"\n },\n {\n \"code\": null,\n \"e\": 8127,\n \"s\": 8026,\n \"text\": \"Now we will export the training output into a savedmodel format so that we can use it for inference.\"\n },\n {\n \"code\": null,\n \"e\": 8444,\n \"s\": 8127,\n \"text\": \"%cd /content/drive/MyDrive/models/research/object_detection!python exporter_main_v2.py \\\\--trained_checkpoint_dir=/content/drive/MyDrive/output_training \\\\--pipeline_config_path=/content/drive/MyDrive/detect_hands/model_data/ssd_mobilenet_v2_fpn_320/pipeline.config \\\\--output_directory /content/drive/MyDrive/inference\"\n },\n {\n \"code\": null,\n \"e\": 8578,\n \"s\": 8444,\n \"text\": \"The important bit of this whole thing is the inference folder. 
jMeter - Webservice Test Plan
In this chapter, we will learn how to create a Test Plan to test a WebService. For our test purpose, we have created a simple webservice project and deployed it on the Tomcat server locally.
To create a webservice project, we have used Eclipse IDE. First write the Service Endpoint Interface HelloWorld under the package com.tutorialspoint.ws. The contents of the HelloWorld.java are as follows −
package com.tutorialspoint.ws;

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import javax.jws.soap.SOAPBinding.Style;

//Service Endpoint Interface
@WebService
@SOAPBinding(style = Style.RPC)

public interface HelloWorld {
   @WebMethod String getHelloWorldMessage(String string);
}
This service has a method getHelloWorldMessage which takes a String parameter.
Next, create the implementation class HelloWorldImpl.java under the package com.tutorialspoint.ws.
package com.tutorialspoint.ws;

import javax.jws.WebService;

@WebService(endpointInterface="com.tutorialspoint.ws.HelloWorld")
public class HelloWorldImpl implements HelloWorld {
   @Override
   public String getHelloWorldMessage(String myName) {
      return("Hello "+myName+" to JAX WS world");
   }
}
Let us now publish this web service locally by creating the Endpoint publisher and expose the service on the server.
The publish method takes two parameters −
Endpoint URL 
String.\nImplementor object, in this case the HelloWorld implementation class, which is exposed as a Web Service at the endpoint identified by the URL mentioned in the parameter above.\nThe contents of HelloWorldPublisher.java are as follows −\npackage com.tutorialspoint.endpoint;\n\nimport javax.xml.ws.Endpoint;\nimport com.tutorialspoint.ws.HelloWorldImpl;\n\npublic class HelloWorldPublisher {\n   public static void main(String[] args) {\n      Endpoint.publish(\"http://localhost:9000/ws/hello\", new HelloWorldImpl());\n   }\n}\nModify the web.xml contents as shown below −\n<web-app>\n   <listener>\n      <listener-class>com.sun.xml.ws.transport.http.servlet.WSServletContextListener</listener-class>\n   </listener>\n   <servlet>\n      <servlet-name>hello</servlet-name>\n      <servlet-class>com.sun.xml.ws.transport.http.servlet.WSServlet</servlet-class>\n      <load-on-startup>1</load-on-startup>\n   </servlet>\n   <servlet-mapping>\n      <servlet-name>hello</servlet-name>\n      <url-pattern>/hello</url-pattern>\n   </servlet-mapping>\n   <session-config>\n      <session-timeout>120</session-timeout>\n   </session-config>\n</web-app>\nTo deploy this application as a webservice, we would need another configuration file sun-jaxws.xml. 
The contents of this file are as follows −\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<endpoints xmlns=\"http://java.sun.com/xml/ns/jax-ws/ri/runtime\" version=\"2.0\">\n   <endpoint\n      name=\"hello\"\n      implementation=\"com.tutorialspoint.ws.HelloWorldImpl\"\n      url-pattern=\"/hello\"/>\n</endpoints>\nNow that all the files are ready, the directory structure would look as shown in the following screenshot −\nNow create a WAR file of this application.\nChoose the project → right click → Export → WAR file.\nSave this as a hello.war file under the webapps folder of the Tomcat server.\nNow start the Tomcat server.\nOnce the server is started, you should be able to access the webservice with the URL − http://localhost:8080/hello/hello\nNow let us create a test plan to test the above webservice.\nOpen the JMeter window by clicking /home/manisha/apache-jmeter2.9/bin/jmeter.sh.\nClick the Test Plan node.\nRename this Test Plan node as WebserviceTest.\nAdd one Thread Group, which is a placeholder for all other elements like Samplers, Controllers, and Listeners.\nRight click on WebserviceTest (our Test Plan) → Add → Threads (Users) → Thread Group. Thread Group will get added under the Test Plan (WebserviceTest) node.\nNext, let us modify the default properties of the Thread Group to suit our testing. 
The following properties are changed −\nName − webservice user\nNumber of Threads (Users) − 2\nRamp-Up Period − leave the default value of 0 seconds.\nLoop Count − 2\nNow that we have defined the users, it is time to define the tasks that they will be performing.\nWe will add the SOAP/XML-RPC Request element −\nRight-click the mouse button to get the Add menu.\nSelect Add → Sampler → SOAP/XML-RPC Request.\nSelect the SOAP/XML-RPC Request element in the tree.\nEdit the following properties as in the image below −\nThe following details are entered in this element −\nName − SOAP/XML-RPC Request\nURL − http://localhost:8080/hello/hello?wsdl\nSoap/XML-RPC Data − Enter the contents below\n<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:web=\"http://ws.tutorialspoint.com/\">\n   <soapenv:Header/>\n   <soapenv:Body>\n      <web:getHelloWorldMessage>\n         <arg0>Manisha</arg0>\n      </web:getHelloWorldMessage>\n   </soapenv:Body>\n</soapenv:Envelope>\nThe final element you need to add to your Test Plan is a Listener. This element is responsible for storing all of the results of your HTTP requests in a file and presenting a visual model of the data.\nSelect the webservice user element.\nAdd a View Results Tree listener by selecting Add → Listener → View Results Tree.\nNow save the above test plan as test_webservice.jmx. 
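Before starting the run, the same request can also be exercised outside JMeter. The snippet below is a sketch rather than part of this tutorial: it rebuilds the sampler's SOAP 1.1 envelope in Python, and the "web" namespace (http://ws.tutorialspoint.com/) is an assumption based on the JAX-WS convention of reversing the service package name, so check it against the WSDL before relying on it.

```python
# Sketch (not part of the original tutorial): rebuild the sampler's SOAP 1.1
# request by hand so the service can be exercised without JMeter.
# ASSUMPTION: the "http://ws.tutorialspoint.com/" namespace is the JAX-WS
# default derived from the package com.tutorialspoint.ws -- verify it against
# the WSDL at http://localhost:8080/hello/hello?wsdl.
def build_envelope(name):
    """Return the SOAP envelope for a getHelloWorldMessage(name) call."""
    return (
        '<soapenv:Envelope '
        'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" '
        'xmlns:web="http://ws.tutorialspoint.com/">'
        "<soapenv:Header/><soapenv:Body>"
        "<web:getHelloWorldMessage>"
        "<arg0>" + name + "</arg0>"
        "</web:getHelloWorldMessage>"
        "</soapenv:Body></soapenv:Envelope>"
    )

# To actually send it, the Tomcat deployment from this chapter must be running:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/hello/hello",
#     data=build_envelope("Manisha").encode("utf-8"),
#     headers={"Content-Type": "text/xml; charset=UTF-8"},
# )
# print(urllib.request.urlopen(req).read().decode("utf-8"))

print(build_envelope("Manisha"))
```

With the Tomcat deployment from this chapter running, uncommenting the urllib.request lines posts the envelope and prints the SOAP response.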
Execute this test plan using Run → Start option.\nThe following output can be seen in the listener.\nIn the last image, you can see the response message \"Hello Manisha to JAX WS world\"."},"parsed":{"kind":"list like","value":[{"code":null,"e":2096,"s":1905,"text":"In this chapter, we will learn how to create a Test Plan to test a WebService. For our test purpose, we have created a simple webservice project and deployed it on the Tomcat server locally."},{"code":null,"e":2302,"s":2096,"text":"To create a webservice project, we have used Eclipse IDE. First write the Service Endpoint Interface HelloWorld under the package com.tutorialspoint.ws. The contents of the HelloWorld.java are as follows −"},{"code":null,"e":2632,"s":2302,"text":"package com.tutorialspoint.ws;\n\nimport javax.jws.WebMethod;\nimport javax.jws.WebService;\nimport javax.jws.soap.SOAPBinding;\nimport javax.jws.soap.SOAPBinding.Style;\n\n//Service Endpoint Interface\n@WebService\n@SOAPBinding(style = Style.RPC)\n\npublic interface HelloWorld {\n   @WebMethod String getHelloWorldMessage(String string);\n}"},{"code":null,"e":2711,"s":2632,"text":"This service has a method getHelloWorldMessage which takes a String parameter."},{"code":null,"e":2810,"s":2711,"text":"Next, create the implementation class HelloWorldImpl.java under the package com.tutorialspoint.ws."},{"code":null,"e":3117,"s":2810,"text":"package com.tutorialspoint.ws;\n\nimport javax.jws.WebService;\n\n@WebService(endpointInterface=\"com.tutorialspoint.ws.HelloWorld\")\npublic class HelloWorldImpl implements HelloWorld {\n   @Override\n   public String getHelloWorldMessage(String myName) {\n      return(\"Hello \"+myName+\" to JAX WS world\");\n   }\n}"},{"code":null,"e":3234,"s":3117,"text":"Let us now publish this web 
service locally by creating the Endpoint publisher and expose the service on the server."},{"code":null,"e":3276,"s":3234,"text":"The publish method takes two parameters −"},{"code":null,"e":3297,"s":3276,"text":"Endpoint URL String."},{"code":null,"e":3318,"s":3297,"text":"Endpoint URL String."},{"code":null,"e":3494,"s":3318,"text":"Implementor object, in this case the HelloWorld implementation class, which is exposed as a Web Service at the endpoint identified by the URL mentioned in the parameter above."},{"code":null,"e":3670,"s":3494,"text":"Implementor object, in this case the HelloWorld implementation class, which is exposed as a Web Service at the endpoint identified by the URL mentioned in the parameter above."},{"code":null,"e":3728,"s":3670,"text":"The contents of HelloWorldPublisher.java are as follows −"},{"code":null,"e":4008,"s":3728,"text":"package com.tutorialspoint.endpoint;\n\nimport javax.xml.ws.Endpoint;\nimport com.tutorialspoint.ws.HelloWorldImpl;\n\npublic class HelloWorldPublisher {\n public static void main(String[] args) {\n Endpoint.publish(\"http://localhost:9000/ws/hello\", new HelloWorldImpl());\n }\n}"},{"code":null,"e":4053,"s":4008,"text":"Modify the web.xml contents as shown below −"},{"code":null,"e":4815,"s":4053,"text":"\n\n\n\n \n \n com.sun.xml.ws.transport.http.servlet.WSServletContextListener\n \n \n\t\n \n hello\n com.sun.xml.ws.transport.http.servlet.WSServlet\n 1\n \n\t\n \n hello\n /hello\n \n\t\n \n 120\n \n\t\n"},{"code":null,"e":4958,"s":4815,"text":"To deploy this application as a webservice, we would need another configuration file sun-jaxws.xml. 
The contents of this file are as follows −"},{"code":null,"e":5235,"s":4958,"text":"\n\n \n \n"},{"code":null,"e":5343,"s":5235,"text":"Now that all the files are ready, the directory structure would look as shown in the following screenshot −"},{"code":null,"e":5386,"s":5343,"text":"Now create a WAR file of this application."},{"code":null,"e":5429,"s":5386,"text":"Now create a WAR file of this application."},{"code":null,"e":5483,"s":5429,"text":"Choose the project → right click → Export → WAR file."},{"code":null,"e":5537,"s":5483,"text":"Choose the project → right click → Export → WAR file."},{"code":null,"e":5608,"s":5537,"text":"Save this as hello.war file under the webapps folder of Tomcat server."},{"code":null,"e":5679,"s":5608,"text":"Save this as hello.war file under the webapps folder of Tomcat server."},{"code":null,"e":5708,"s":5679,"text":"Now start the Tomcat server."},{"code":null,"e":5737,"s":5708,"text":"Now start the Tomcat server."},{"code":null,"e":5858,"s":5737,"text":"Once the server is started, you should be able to access the webservice with the URL − http://localhost:8080/hello/hello"},{"code":null,"e":5979,"s":5858,"text":"Once the server is started, you should be able to access the webservice with the URL − http://localhost:8080/hello/hello"},{"code":null,"e":6039,"s":5979,"text":"Now let us create a test plan to test the above webservice."},{"code":null,"e":6120,"s":6039,"text":"Open the JMeter window by clicking /home/manisha/apache-jmeter2.9/bin/jmeter.sh."},{"code":null,"e":6201,"s":6120,"text":"Open the JMeter window by clicking /home/manisha/apache-jmeter2.9/bin/jmeter.sh."},{"code":null,"e":6227,"s":6201,"text":"Click the Test Plan node."},{"code":null,"e":6253,"s":6227,"text":"Click the Test Plan node."},{"code":null,"e":6299,"s":6253,"text":"Rename this Test Plan node as WebserviceTest."},{"code":null,"e":6345,"s":6299,"text":"Rename this Test Plan node as WebserviceTest."},{"code":null,"e":6454,"s":6345,"text":"Add one Thread 
Group, which is placeholder for all other elements like Samplers, Controllers, and Listeners."},{"code":null,"e":6611,"s":6454,"text":"Right click on WebserviceTest (our Test Plan) → Add → Threads (Users) → Thread Group. Thread Group will get added under the Test Plan (WebserviceTest) node."},{"code":null,"e":6768,"s":6611,"text":"Right click on WebserviceTest (our Test Plan) → Add → Threads (Users) → Thread Group. Thread Group will get added under the Test Plan (WebserviceTest) node."},{"code":null,"e":7017,"s":6768,"text":"Next, let us modify the default properties of the Thread Group to suit our testing. Following properties are changed −\n\nName − webservice user\nNumber of Threads (Users) − 2\nRamp-Up Period − leave the the default value of 0 seconds.\nLoop Count − 2\n\n"},{"code":null,"e":7136,"s":7017,"text":"Next, let us modify the default properties of the Thread Group to suit our testing. Following properties are changed −"},{"code":null,"e":7159,"s":7136,"text":"Name − webservice user"},{"code":null,"e":7182,"s":7159,"text":"Name − webservice user"},{"code":null,"e":7212,"s":7182,"text":"Number of Threads (Users) − 2"},{"code":null,"e":7242,"s":7212,"text":"Number of Threads (Users) − 2"},{"code":null,"e":7301,"s":7242,"text":"Ramp-Up Period − leave the the default value of 0 seconds."},{"code":null,"e":7360,"s":7301,"text":"Ramp-Up Period − leave the the default value of 0 seconds."},{"code":null,"e":7375,"s":7360,"text":"Loop Count − 2"},{"code":null,"e":7390,"s":7375,"text":"Loop Count − 2"},{"code":null,"e":7487,"s":7390,"text":"Now that we have defined the users, it is time to define the tasks that they will be performing."},{"code":null,"e":7530,"s":7487,"text":"We will add SOAP/XML-RPC Request element −"},{"code":null,"e":7576,"s":7530,"text":"Right-click mouse button to get the Add menu."},{"code":null,"e":7622,"s":7576,"text":"Right-click mouse button to get the Add menu."},{"code":null,"e":7667,"s":7622,"text":"Select Add → Sampler → 
SOAP/XML-RPC Request."},{"code":null,"e":7712,"s":7667,"text":"Select Add → Sampler → SOAP/XML-RPC Request."},{"code":null,"e":7764,"s":7712,"text":"Select the SOAP/XML-RPC Request element in the tree"},{"code":null,"e":7816,"s":7764,"text":"Select the SOAP/XML-RPC Request element in the tree"},{"code":null,"e":7870,"s":7816,"text":"Edit the following properties as in the image below −"},{"code":null,"e":7924,"s":7870,"text":"Edit the following properties as in the image below −"},{"code":null,"e":8097,"s":7924,"text":"The following details are entered in this element −\n\nName − SOAP/XML-RPC Request\nURL − http://localhost:8080/hello/hello?wsdl\nSoap/XML-RPC Data − Enter the below contents\n\n"},{"code":null,"e":8149,"s":8097,"text":"The following details are entered in this element −"},{"code":null,"e":8177,"s":8149,"text":"Name − SOAP/XML-RPC Request"},{"code":null,"e":8205,"s":8177,"text":"Name − SOAP/XML-RPC Request"},{"code":null,"e":8250,"s":8205,"text":"URL − http://localhost:8080/hello/hello?wsdl"},{"code":null,"e":8295,"s":8250,"text":"URL − http://localhost:8080/hello/hello?wsdl"},{"code":null,"e":8340,"s":8295,"text":"Soap/XML-RPC Data − Enter the below contents"},{"code":null,"e":8385,"s":8340,"text":"Soap/XML-RPC Data − Enter the below contents"},{"code":null,"e":8693,"s":8385,"text":"\n \n\t\n \n \n Manisha\n \n \n \n"},{"code":null,"e":8894,"s":8693,"text":"The final element you need to add to your Test Plan is a Listener. 
This element is responsible for storing all of the results of your HTTP requests in a file and presenting a visual model of the data."},{"code":null,"e":8930,"s":8894,"text":"Select the webservice user element."},{"code":null,"e":8966,"s":8930,"text":"Select the webservice user element."},{"code":null,"e":9048,"s":8966,"text":"Add a View Results Tree listener by selecting Add → Listener → View Results Tree."},{"code":null,"e":9130,"s":9048,"text":"Add a View Results Tree listener by selecting Add → Listener → View Results Tree."},{"code":null,"e":9232,"s":9130,"text":"Now save the above test plan as test_webservice.jmx. Execute this test plan using Run → Start option."},{"code":null,"e":9282,"s":9232,"text":"The following output can be seen in the listener."},{"code":null,"e":9367,"s":9282,"text":"In the last image, you can see the response message \"Hello Manisha to JAX WS world\"."},{"code":null,"e":9402,"s":9367,"text":"\n 59 Lectures \n 9.5 hours \n"},{"code":null,"e":9416,"s":9402,"text":" Rahul Shetty"},{"code":null,"e":9452,"s":9416,"text":"\n 54 Lectures \n 13.5 hours \n"},{"code":null,"e":9469,"s":9452,"text":" Wallace Tauriac"},{"code":null,"e":9504,"s":9469,"text":"\n 23 Lectures \n 1.5 hours \n"},{"code":null,"e":9516,"s":9504,"text":" Anuja Jain"},{"code":null,"e":9549,"s":9516,"text":"\n 12 Lectures \n 1 hours \n"},{"code":null,"e":9563,"s":9549,"text":" Spotle Learn"},{"code":null,"e":9570,"s":9563,"text":" Print"},{"code":null,"e":9581,"s":9570,"text":" Add Notes"}],"string":"[\n {\n \"code\": null,\n \"e\": 2096,\n \"s\": 1905,\n \"text\": \"In this chapter, we will learn how to create a Test Plan to test a WebService. For our test purpose, we have created a simple webservice project and deployed it on the Tomcat server locally.\"\n },\n {\n \"code\": null,\n \"e\": 2302,\n \"s\": 2096,\n \"text\": \"To create a webservice project, we have used Eclipse IDE. 
First write the Service Endpoint Interface HelloWorld under the package com.tutorialspoint.ws. The contents of the HelloWorld.java are as follows −\"\n },\n {\n \"code\": null,\n \"e\": 2632,\n \"s\": 2302,\n \"text\": \"package com.tutorialspoint.ws;\\n\\nimport javax.jws.WebMethod;\\nimport javax.jws.WebService;\\nimport javax.jws.soap.SOAPBinding;\\nimport javax.jws.soap.SOAPBinding.Style;\\n\\n//Service Endpoint Interface\\n@WebService\\n@SOAPBinding(style = Style.RPC)\\n\\npublic interface HelloWorld {\\n @WebMethod String getHelloWorldMessage(String string);\\n}\"\n },\n {\n \"code\": null,\n \"e\": 2711,\n \"s\": 2632,\n \"text\": \"This service has a method getHelloWorldMessage which takes a String parameter.\"\n },\n {\n \"code\": null,\n \"e\": 2810,\n \"s\": 2711,\n \"text\": \"Next, create the implementation class HelloWorldImpl.java under the package com.tutorialspoint.ws.\"\n },\n {\n \"code\": null,\n \"e\": 3117,\n \"s\": 2810,\n \"text\": \"package com.tutorialspoint.ws;\\n\\nimport javax.jws.WebService;\\n\\n@WebService(endpointInterface=\\\"com.tutorialspoint.ws.HelloWorld\\\")\\npublic class HelloWorldImpl implements HelloWorld {\\n @Override\\n public String getHelloWorldMessage(String myName) {\\n return(\\\"Hello \\\"+myName+\\\" to JAX WS world\\\");\\n }\\n}\"\n },\n {\n \"code\": null,\n \"e\": 3234,\n \"s\": 3117,\n \"text\": \"Let us now publish this web service locally by creating the Endpoint publisher and expose the service on the server.\"\n },\n {\n \"code\": null,\n \"e\": 3276,\n \"s\": 3234,\n \"text\": \"The publish method takes two parameters −\"\n },\n {\n \"code\": null,\n \"e\": 3297,\n \"s\": 3276,\n \"text\": \"Endpoint URL String.\"\n },\n {\n \"code\": null,\n \"e\": 3318,\n \"s\": 3297,\n \"text\": \"Endpoint URL String.\"\n },\n {\n \"code\": null,\n \"e\": 3494,\n \"s\": 3318,\n \"text\": \"Implementor object, in this case the HelloWorld implementation class, which is exposed as a Web Service at the endpoint 
identified by the URL mentioned in the parameter above.\"\n },\n {\n \"code\": null,\n \"e\": 3670,\n \"s\": 3494,\n \"text\": \"Implementor object, in this case the HelloWorld implementation class, which is exposed as a Web Service at the endpoint identified by the URL mentioned in the parameter above.\"\n },\n {\n \"code\": null,\n \"e\": 3728,\n \"s\": 3670,\n \"text\": \"The contents of HelloWorldPublisher.java are as follows −\"\n },\n {\n \"code\": null,\n \"e\": 4008,\n \"s\": 3728,\n \"text\": \"package com.tutorialspoint.endpoint;\\n\\nimport javax.xml.ws.Endpoint;\\nimport com.tutorialspoint.ws.HelloWorldImpl;\\n\\npublic class HelloWorldPublisher {\\n public static void main(String[] args) {\\n Endpoint.publish(\\\"http://localhost:9000/ws/hello\\\", new HelloWorldImpl());\\n }\\n}\"\n },\n {\n \"code\": null,\n \"e\": 4053,\n \"s\": 4008,\n \"text\": \"Modify the web.xml contents as shown below −\"\n },\n {\n \"code\": null,\n \"e\": 4815,\n \"s\": 4053,\n \"text\": \"\\n\\n\\n\\n \\n \\n com.sun.xml.ws.transport.http.servlet.WSServletContextListener\\n \\n \\n\\t\\n \\n hello\\n com.sun.xml.ws.transport.http.servlet.WSServlet\\n 1\\n \\n\\t\\n \\n hello\\n /hello\\n \\n\\t\\n \\n 120\\n \\n\\t\\n\"\n },\n {\n \"code\": null,\n \"e\": 4958,\n \"s\": 4815,\n \"text\": \"To deploy this application as a webservice, we would need another configuration file sun-jaxws.xml. 
The contents of this file are as follows −\"\n },\n {\n \"code\": null,\n \"e\": 5235,\n \"s\": 4958,\n \"text\": \"\\n\\n \\n \\n\"\n },\n {\n \"code\": null,\n \"e\": 5343,\n \"s\": 5235,\n \"text\": \"Now that all the files are ready, the directory structure would look as shown in the following screenshot −\"\n },\n {\n \"code\": null,\n \"e\": 5386,\n \"s\": 5343,\n \"text\": \"Now create a WAR file of this application.\"\n },\n {\n \"code\": null,\n \"e\": 5429,\n \"s\": 5386,\n \"text\": \"Now create a WAR file of this application.\"\n },\n {\n \"code\": null,\n \"e\": 5483,\n \"s\": 5429,\n \"text\": \"Choose the project → right click → Export → WAR file.\"\n },\n {\n \"code\": null,\n \"e\": 5537,\n \"s\": 5483,\n \"text\": \"Choose the project → right click → Export → WAR file.\"\n },\n {\n \"code\": null,\n \"e\": 5608,\n \"s\": 5537,\n \"text\": \"Save this as hello.war file under the webapps folder of Tomcat server.\"\n },\n {\n \"code\": null,\n \"e\": 5679,\n \"s\": 5608,\n \"text\": \"Save this as hello.war file under the webapps folder of Tomcat server.\"\n },\n {\n \"code\": null,\n \"e\": 5708,\n \"s\": 5679,\n \"text\": \"Now start the Tomcat server.\"\n },\n {\n \"code\": null,\n \"e\": 5737,\n \"s\": 5708,\n \"text\": \"Now start the Tomcat server.\"\n },\n {\n \"code\": null,\n \"e\": 5858,\n \"s\": 5737,\n \"text\": \"Once the server is started, you should be able to access the webservice with the URL − http://localhost:8080/hello/hello\"\n },\n {\n \"code\": null,\n \"e\": 5979,\n \"s\": 5858,\n \"text\": \"Once the server is started, you should be able to access the webservice with the URL − http://localhost:8080/hello/hello\"\n },\n {\n \"code\": null,\n \"e\": 6039,\n \"s\": 5979,\n \"text\": \"Now let us create a test plan to test the above webservice.\"\n },\n {\n \"code\": null,\n \"e\": 6120,\n \"s\": 6039,\n \"text\": \"Open the JMeter window by clicking /home/manisha/apache-jmeter2.9/bin/jmeter.sh.\"\n },\n {\n \"code\": null,\n 
\"e\": 6201,\n \"s\": 6120,\n \"text\": \"Open the JMeter window by clicking /home/manisha/apache-jmeter2.9/bin/jmeter.sh.\"\n },\n {\n \"code\": null,\n \"e\": 6227,\n \"s\": 6201,\n \"text\": \"Click the Test Plan node.\"\n },\n {\n \"code\": null,\n \"e\": 6253,\n \"s\": 6227,\n \"text\": \"Click the Test Plan node.\"\n },\n {\n \"code\": null,\n \"e\": 6299,\n \"s\": 6253,\n \"text\": \"Rename this Test Plan node as WebserviceTest.\"\n },\n {\n \"code\": null,\n \"e\": 6345,\n \"s\": 6299,\n \"text\": \"Rename this Test Plan node as WebserviceTest.\"\n },\n {\n \"code\": null,\n \"e\": 6454,\n \"s\": 6345,\n \"text\": \"Add one Thread Group, which is placeholder for all other elements like Samplers, Controllers, and Listeners.\"\n },\n {\n \"code\": null,\n \"e\": 6611,\n \"s\": 6454,\n \"text\": \"Right click on WebserviceTest (our Test Plan) → Add → Threads (Users) → Thread Group. Thread Group will get added under the Test Plan (WebserviceTest) node.\"\n },\n {\n \"code\": null,\n \"e\": 6768,\n \"s\": 6611,\n \"text\": \"Right click on WebserviceTest (our Test Plan) → Add → Threads (Users) → Thread Group. Thread Group will get added under the Test Plan (WebserviceTest) node.\"\n },\n {\n \"code\": null,\n \"e\": 7017,\n \"s\": 6768,\n \"text\": \"Next, let us modify the default properties of the Thread Group to suit our testing. Following properties are changed −\\n\\nName − webservice user\\nNumber of Threads (Users) − 2\\nRamp-Up Period − leave the the default value of 0 seconds.\\nLoop Count − 2\\n\\n\"\n },\n {\n \"code\": null,\n \"e\": 7136,\n \"s\": 7017,\n \"text\": \"Next, let us modify the default properties of the Thread Group to suit our testing. 
Following properties are changed −\"\n },\n {\n \"code\": null,\n \"e\": 7159,\n \"s\": 7136,\n \"text\": \"Name − webservice user\"\n },\n {\n \"code\": null,\n \"e\": 7182,\n \"s\": 7159,\n \"text\": \"Name − webservice user\"\n },\n {\n \"code\": null,\n \"e\": 7212,\n \"s\": 7182,\n \"text\": \"Number of Threads (Users) − 2\"\n },\n {\n \"code\": null,\n \"e\": 7242,\n \"s\": 7212,\n \"text\": \"Number of Threads (Users) − 2\"\n },\n {\n \"code\": null,\n \"e\": 7301,\n \"s\": 7242,\n \"text\": \"Ramp-Up Period − leave the the default value of 0 seconds.\"\n },\n {\n \"code\": null,\n \"e\": 7360,\n \"s\": 7301,\n \"text\": \"Ramp-Up Period − leave the the default value of 0 seconds.\"\n },\n {\n \"code\": null,\n \"e\": 7375,\n \"s\": 7360,\n \"text\": \"Loop Count − 2\"\n },\n {\n \"code\": null,\n \"e\": 7390,\n \"s\": 7375,\n \"text\": \"Loop Count − 2\"\n },\n {\n \"code\": null,\n \"e\": 7487,\n \"s\": 7390,\n \"text\": \"Now that we have defined the users, it is time to define the tasks that they will be performing.\"\n },\n {\n \"code\": null,\n \"e\": 7530,\n \"s\": 7487,\n \"text\": \"We will add SOAP/XML-RPC Request element −\"\n },\n {\n \"code\": null,\n \"e\": 7576,\n \"s\": 7530,\n \"text\": \"Right-click mouse button to get the Add menu.\"\n },\n {\n \"code\": null,\n \"e\": 7622,\n \"s\": 7576,\n \"text\": \"Right-click mouse button to get the Add menu.\"\n },\n {\n \"code\": null,\n \"e\": 7667,\n \"s\": 7622,\n \"text\": \"Select Add → Sampler → SOAP/XML-RPC Request.\"\n },\n {\n \"code\": null,\n \"e\": 7712,\n \"s\": 7667,\n \"text\": \"Select Add → Sampler → SOAP/XML-RPC Request.\"\n },\n {\n \"code\": null,\n \"e\": 7764,\n \"s\": 7712,\n \"text\": \"Select the SOAP/XML-RPC Request element in the tree\"\n },\n {\n \"code\": null,\n \"e\": 7816,\n \"s\": 7764,\n \"text\": \"Select the SOAP/XML-RPC Request element in the tree\"\n },\n {\n \"code\": null,\n \"e\": 7870,\n \"s\": 7816,\n \"text\": \"Edit the following properties as in the 
image below −\"\n },\n {\n \"code\": null,\n \"e\": 7924,\n \"s\": 7870,\n \"text\": \"Edit the following properties as in the image below −\"\n },\n {\n \"code\": null,\n \"e\": 8097,\n \"s\": 7924,\n \"text\": \"The following details are entered in this element −\\n\\nName − SOAP/XML-RPC Request\\nURL − http://localhost:8080/hello/hello?wsdl\\nSoap/XML-RPC Data − Enter the below contents\\n\\n\"\n },\n {\n \"code\": null,\n \"e\": 8149,\n \"s\": 8097,\n \"text\": \"The following details are entered in this element −\"\n },\n {\n \"code\": null,\n \"e\": 8177,\n \"s\": 8149,\n \"text\": \"Name − SOAP/XML-RPC Request\"\n },\n {\n \"code\": null,\n \"e\": 8205,\n \"s\": 8177,\n \"text\": \"Name − SOAP/XML-RPC Request\"\n },\n {\n \"code\": null,\n \"e\": 8250,\n \"s\": 8205,\n \"text\": \"URL − http://localhost:8080/hello/hello?wsdl\"\n },\n {\n \"code\": null,\n \"e\": 8295,\n \"s\": 8250,\n \"text\": \"URL − http://localhost:8080/hello/hello?wsdl\"\n },\n {\n \"code\": null,\n \"e\": 8340,\n \"s\": 8295,\n \"text\": \"Soap/XML-RPC Data − Enter the below contents\"\n },\n {\n \"code\": null,\n \"e\": 8385,\n \"s\": 8340,\n \"text\": \"Soap/XML-RPC Data − Enter the below contents\"\n },\n {\n \"code\": null,\n \"e\": 8693,\n \"s\": 8385,\n \"text\": \"\\n \\n\\t\\n \\n \\n Manisha\\n \\n \\n \\n\"\n },\n {\n \"code\": null,\n \"e\": 8894,\n \"s\": 8693,\n \"text\": \"The final element you need to add to your Test Plan is a Listener. 
This element is responsible for storing all of the results of your HTTP requests in a file and presenting a visual model of the data.\"\n },\n {\n \"code\": null,\n \"e\": 8930,\n \"s\": 8894,\n \"text\": \"Select the webservice user element.\"\n },\n {\n \"code\": null,\n \"e\": 8966,\n \"s\": 8930,\n \"text\": \"Select the webservice user element.\"\n },\n {\n \"code\": null,\n \"e\": 9048,\n \"s\": 8966,\n \"text\": \"Add a View Results Tree listener by selecting Add → Listener → View Results Tree.\"\n },\n {\n \"code\": null,\n \"e\": 9130,\n \"s\": 9048,\n \"text\": \"Add a View Results Tree listener by selecting Add → Listener → View Results Tree.\"\n },\n {\n \"code\": null,\n \"e\": 9232,\n \"s\": 9130,\n \"text\": \"Now save the above test plan as test_webservice.jmx. Execute this test plan using Run → Start option.\"\n },\n {\n \"code\": null,\n \"e\": 9282,\n \"s\": 9232,\n \"text\": \"The following output can be seen in the listener.\"\n },\n {\n \"code\": null,\n \"e\": 9367,\n \"s\": 9282,\n \"text\": \"In the last image, you can see the response message \\\"Hello Manisha to JAX WS world\\\".\"\n },\n {\n \"code\": null,\n \"e\": 9402,\n \"s\": 9367,\n \"text\": \"\\n 59 Lectures \\n 9.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 9416,\n \"s\": 9402,\n \"text\": \" Rahul Shetty\"\n },\n {\n \"code\": null,\n \"e\": 9452,\n \"s\": 9416,\n \"text\": \"\\n 54 Lectures \\n 13.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 9469,\n \"s\": 9452,\n \"text\": \" Wallace Tauriac\"\n },\n {\n \"code\": null,\n \"e\": 9504,\n \"s\": 9469,\n \"text\": \"\\n 23 Lectures \\n 1.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 9516,\n \"s\": 9504,\n \"text\": \" Anuja Jain\"\n },\n {\n \"code\": null,\n \"e\": 9549,\n \"s\": 9516,\n \"text\": \"\\n 12 Lectures \\n 1 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 9563,\n \"s\": 9549,\n \"text\": \" Spotle Learn\"\n },\n {\n \"code\": null,\n \"e\": 9570,\n \"s\": 9563,\n \"text\": \" Print\"\n },\n {\n 
\"code\": null,\n \"e\": 9581,\n \"s\": 9570,\n \"text\": \" Add Notes\"\n }\n]"}}},{"rowIdx":560,"cells":{"title":{"kind":"string","value":"How to Show All Tables in MySQL using Python? - GeeksforGeeks"},"text":{"kind":"string","value":"29 Sep, 2021\nA connector is employed when we have to use mysql with other programming languages. The work of mysql-connector is to provide access to MySQL Driver to the required language. Thus, it generates a connection between the programming language and the MySQL Server.\nIn order to make python interact with the MySQL database, we use Python-MySQL-Connector. Here we will try implementing SQL queries which will show the names of all the tables present in the database or server.\nSyntax:\nTo show the name of tables present inside a database:\nSHOW Tables;\nTo show the name of tables present inside a server:\nSELECT table_name\nFROM information_schema.tables;\nDatabase in use:\nSchema of the database used\nThe following programs implement the same.\nExample 1: Display table names present inside a database:\nPython3\nimport mysql.connector mydb = mysql.connector.connect( host=\"localhost\", user=\"root\", password=\"\", database=\"gfg\") mycursor = mydb.cursor() mycursor.execute(\"Show tables;\") myresult = mycursor.fetchall() for x in myresult: print(x)\nOutput:\nTable names in gfg database\nExample 2: Display table names present inside a server:\nPython3\nimport mysql.connector mydb = mysql.connector.connect( host=\"localhost\", user=\"root\", password=\"\",) mycursor = mydb.cursor() mycursor.execute(\"SELECT table_name FROM information_schema.tables;\") myresult = mycursor.fetchall() for x in myresult: print(x)\nOutput:\nTable names in server\nsurindertarika1234\nPicked\nPython-mySQL\nPython\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nComments\nOld Comments\nPython Dictionary\nRead a file line by line in Python\nEnumerate() in Python\nHow to Install PIP on Windows 
?\nIterate over a list in Python\nDifferent ways to create Pandas Dataframe\nPython String | replace()\nPython program to convert a list to string\nReading and Writing to text files in Python\nsum() function in Python"},"parsed":{"kind":"list like","value":[{"code":null,"e":24765,"s":24737,"text":"\n29 Sep, 2021"},{"code":null,"e":25027,"s":24765,"text":"A connector is employed when we have to use mysql with other programming languages. The work of mysql-connector is to provide access to MySQL Driver to the required language. Thus, it generates a connection between the programming language and the MySQL Server."},{"code":null,"e":25237,"s":25027,"text":"In order to make python interact with the MySQL database, we use Python-MySQL-Connector. Here we will try implementing SQL queries which will show the names of all the tables present in the database or server."},{"code":null,"e":25245,"s":25237,"text":"Syntax:"},{"code":null,"e":25299,"s":25245,"text":"To show the name of tables present inside a database:"},{"code":null,"e":25312,"s":25299,"text":"SHOW Tables;"},{"code":null,"e":25364,"s":25312,"text":"To show the name of tables present inside a server:"},{"code":null,"e":25382,"s":25364,"text":"SELECT table_name"},{"code":null,"e":25414,"s":25382,"text":"FROM information_schema.tables;"},{"code":null,"e":25431,"s":25414,"text":"Database in use:"},{"code":null,"e":25459,"s":25431,"text":"Schema of the database used"},{"code":null,"e":25502,"s":25459,"text":"The following programs implement the same."},{"code":null,"e":25560,"s":25502,"text":"Example 1: Display table names present inside a database:"},{"code":null,"e":25568,"s":25560,"text":"Python3"},{"code":"import mysql.connector mydb = mysql.connector.connect( host=\"localhost\", user=\"root\", password=\"\", database=\"gfg\") mycursor = mydb.cursor() mycursor.execute(\"Show tables;\") myresult = mycursor.fetchall() for x in myresult: 
print(x)","e":25815,"s":25568,"text":null},{"code":null,"e":25823,"s":25815,"text":"Output:"},{"code":null,"e":25852,"s":25823,"text":"Table names in gfg database"},{"code":null,"e":25908,"s":25852,"text":"Example 2: Display table names present inside a server:"},{"code":null,"e":25916,"s":25908,"text":"Python3"},{"code":"import mysql.connector mydb = mysql.connector.connect( host=\"localhost\", user=\"root\", password=\"\",) mycursor = mydb.cursor() mycursor.execute(\"SELECT table_name FROM information_schema.tables;\") myresult = mycursor.fetchall() for x in myresult: print(x)","e":26174,"s":25916,"text":null},{"code":null,"e":26182,"s":26174,"text":"Output:"},{"code":null,"e":26204,"s":26182,"text":"Table names in server"},{"code":null,"e":26223,"s":26204,"text":"surindertarika1234"},{"code":null,"e":26230,"s":26223,"text":"Picked"},{"code":null,"e":26243,"s":26230,"text":"Python-mySQL"},{"code":null,"e":26250,"s":26243,"text":"Python"},{"code":null,"e":26348,"s":26250,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":26357,"s":26348,"text":"Comments"},{"code":null,"e":26370,"s":26357,"text":"Old Comments"},{"code":null,"e":26388,"s":26370,"text":"Python Dictionary"},{"code":null,"e":26423,"s":26388,"text":"Read a file line by line in Python"},{"code":null,"e":26445,"s":26423,"text":"Enumerate() in Python"},{"code":null,"e":26477,"s":26445,"text":"How to Install PIP on Windows ?"},{"code":null,"e":26507,"s":26477,"text":"Iterate over a list in Python"},{"code":null,"e":26549,"s":26507,"text":"Different ways to create Pandas Dataframe"},{"code":null,"e":26575,"s":26549,"text":"Python String | replace()"},{"code":null,"e":26618,"s":26575,"text":"Python program to convert a list to string"},{"code":null,"e":26662,"s":26618,"text":"Reading and Writing to text files in Python"}],"string":"[\n {\n \"code\": null,\n \"e\": 24765,\n \"s\": 24737,\n \"text\": \"\\n29 Sep, 2021\"\n },\n {\n 
\"code\": null,\n \"e\": 25027,\n \"s\": 24765,\n \"text\": \"A connector is employed when we have to use mysql with other programming languages. The work of mysql-connector is to provide access to MySQL Driver to the required language. Thus, it generates a connection between the programming language and the MySQL Server.\"\n },\n {\n \"code\": null,\n \"e\": 25237,\n \"s\": 25027,\n \"text\": \"In order to make python interact with the MySQL database, we use Python-MySQL-Connector. Here we will try implementing SQL queries which will show the names of all the tables present in the database or server.\"\n },\n {\n \"code\": null,\n \"e\": 25245,\n \"s\": 25237,\n \"text\": \"Syntax:\"\n },\n {\n \"code\": null,\n \"e\": 25299,\n \"s\": 25245,\n \"text\": \"To show the name of tables present inside a database:\"\n },\n {\n \"code\": null,\n \"e\": 25312,\n \"s\": 25299,\n \"text\": \"SHOW Tables;\"\n },\n {\n \"code\": null,\n \"e\": 25364,\n \"s\": 25312,\n \"text\": \"To show the name of tables present inside a server:\"\n },\n {\n \"code\": null,\n \"e\": 25382,\n \"s\": 25364,\n \"text\": \"SELECT table_name\"\n },\n {\n \"code\": null,\n \"e\": 25414,\n \"s\": 25382,\n \"text\": \"FROM information_schema.tables;\"\n },\n {\n \"code\": null,\n \"e\": 25431,\n \"s\": 25414,\n \"text\": \"Database in use:\"\n },\n {\n \"code\": null,\n \"e\": 25459,\n \"s\": 25431,\n \"text\": \"Schema of the database used\"\n },\n {\n \"code\": null,\n \"e\": 25502,\n \"s\": 25459,\n \"text\": \"The following programs implement the same.\"\n },\n {\n \"code\": null,\n \"e\": 25560,\n \"s\": 25502,\n \"text\": \"Example 1: Display table names present inside a database:\"\n },\n {\n \"code\": null,\n \"e\": 25568,\n \"s\": 25560,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"import mysql.connector mydb = mysql.connector.connect( host=\\\"localhost\\\", user=\\\"root\\\", password=\\\"\\\", database=\\\"gfg\\\") mycursor = mydb.cursor() mycursor.execute(\\\"Show tables;\\\") 
myresult = mycursor.fetchall() for x in myresult: print(x)\",\n \"e\": 25815,\n \"s\": 25568,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 25823,\n \"s\": 25815,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 25852,\n \"s\": 25823,\n \"text\": \"Table names in gfg database\"\n },\n {\n \"code\": null,\n \"e\": 25908,\n \"s\": 25852,\n \"text\": \"Example 2: Display table names present inside a server:\"\n },\n {\n \"code\": null,\n \"e\": 25916,\n \"s\": 25908,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"import mysql.connector mydb = mysql.connector.connect( host=\\\"localhost\\\", user=\\\"root\\\", password=\\\"\\\",) mycursor = mydb.cursor() mycursor.execute(\\\"SELECT table_name FROM information_schema.tables;\\\") myresult = mycursor.fetchall() for x in myresult: print(x)\",\n \"e\": 26174,\n \"s\": 25916,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 26182,\n \"s\": 26174,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 26204,\n \"s\": 26182,\n \"text\": \"Table names in server\"\n },\n {\n \"code\": null,\n \"e\": 26223,\n \"s\": 26204,\n \"text\": \"surindertarika1234\"\n },\n {\n \"code\": null,\n \"e\": 26230,\n \"s\": 26223,\n \"text\": \"Picked\"\n },\n {\n \"code\": null,\n \"e\": 26243,\n \"s\": 26230,\n \"text\": \"Python-mySQL\"\n },\n {\n \"code\": null,\n \"e\": 26250,\n \"s\": 26243,\n \"text\": \"Python\"\n },\n {\n \"code\": null,\n \"e\": 26348,\n \"s\": 26250,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 26357,\n \"s\": 26348,\n \"text\": \"Comments\"\n },\n {\n \"code\": null,\n \"e\": 26370,\n \"s\": 26357,\n \"text\": \"Old Comments\"\n },\n {\n \"code\": null,\n \"e\": 26388,\n \"s\": 26370,\n \"text\": \"Python Dictionary\"\n },\n {\n \"code\": null,\n \"e\": 26423,\n \"s\": 26388,\n \"text\": \"Read a file line by line in Python\"\n },\n {\n \"code\": null,\n \"e\": 26445,\n \"s\": 
26423,\n \"text\": \"Enumerate() in Python\"\n },\n {\n \"code\": null,\n \"e\": 26477,\n \"s\": 26445,\n \"text\": \"How to Install PIP on Windows ?\"\n },\n {\n \"code\": null,\n \"e\": 26507,\n \"s\": 26477,\n \"text\": \"Iterate over a list in Python\"\n },\n {\n \"code\": null,\n \"e\": 26549,\n \"s\": 26507,\n \"text\": \"Different ways to create Pandas Dataframe\"\n },\n {\n \"code\": null,\n \"e\": 26575,\n \"s\": 26549,\n \"text\": \"Python String | replace()\"\n },\n {\n \"code\": null,\n \"e\": 26618,\n \"s\": 26575,\n \"text\": \"Python program to convert a list to string\"\n },\n {\n \"code\": null,\n \"e\": 26662,\n \"s\": 26618,\n \"text\": \"Reading and Writing to text files in Python\"\n }\n]"}}},{"rowIdx":561,"cells":{"title":{"kind":"string","value":"Convert a Number to Hexadecimal in C++"},"text":{"kind":"string","value":"Suppose we have an integer; we have to devise an algorithm to convert it to hexadecimal. For negative numbers we will use the two’s complement method.\nSo, if the input is like 254 and -12, then the output will be fe and fffffff4 respectively.\nTo solve this, we will follow these steps −\nif num1 is same as 0, then −return \"0\"\nif num1 is same as 0, then −\nreturn \"0\"\nreturn \"0\"\nnum := num1\nnum := num1\ns := blank string\ns := blank string\nwhile num is non-zero, do −temp := num mod 16if temp <= 9, then −s := s + temp as numeric characterOtherwises := s + temp as alphabetnum := num / 16\nwhile num is non-zero, do −\ntemp := num mod 16\ntemp := num mod 16\nif temp <= 9, then −s := s + temp as numeric character\nif temp <= 9, then −\ns := s + temp as numeric character\ns := s + temp as numeric character\nOtherwises := s + temp as alphabet\nOtherwise\ns := s + temp as alphabet\ns := s + temp as alphabet\nnum := num / 16\nnum := num / 16\nreverse the array s\nreverse the array s\nreturn s\nreturn s\nLet us see the following implementation to get a better understanding −\n Live Demo\n#include \nusing namespace std;\nclass 
Solution {\npublic:\n string toHex(int num1){\n if (num1 == 0)\n return \"0\";\n u_int num = num1;\n string s = \"\";\n while (num) {\n int temp = num % 16;\n if (temp <= 9)\n s += (48 + temp);\n else\n s += (87 + temp);\n num = num / 16;\n }\n reverse(s.begin(), s.end());\n return s;\n }\n};\nmain(){\n Solution ob;\n cout << (ob.toHex(254)) << endl;\n cout << (ob.toHex(-12));\n}\n254\n-12\nfe\nfffffff4"},"parsed":{"kind":"list like","value":[{"code":null,"e":1213,"s":1062,"text":"Suppose we have an integer; we have to devise an algorithm to convert it to hexadecimal. For negative numbers we will use the two’s complement method."},{"code":null,"e":1305,"s":1213,"text":"So, if the input is like 254 and -12, then the output will be fe and fffffff4 respectively."},{"code":null,"e":1349,"s":1305,"text":"To solve this, we will follow these steps −"},{"code":null,"e":1388,"s":1349,"text":"if num1 is same as 0, then −return \"0\""},{"code":null,"e":1417,"s":1388,"text":"if num1 is same as 0, then −"},{"code":null,"e":1428,"s":1417,"text":"return \"0\""},{"code":null,"e":1439,"s":1428,"text":"return \"0\""},{"code":null,"e":1451,"s":1439,"text":"num := num1"},{"code":null,"e":1463,"s":1451,"text":"num := num1"},{"code":null,"e":1481,"s":1463,"text":"s := blank string"},{"code":null,"e":1499,"s":1481,"text":"s := blank string"},{"code":null,"e":1648,"s":1499,"text":"while num is non-zero, do −temp := num mod 16if temp <= 9, then −s := s + temp as numeric characterOtherwises := s + temp as alphabetnum := num / 16"},{"code":null,"e":1676,"s":1648,"text":"while num is non-zero, do −"},{"code":null,"e":1695,"s":1676,"text":"temp := num mod 16"},{"code":null,"e":1714,"s":1695,"text":"temp := num mod 16"},{"code":null,"e":1769,"s":1714,"text":"if temp <= 9, then −s := s + temp as numeric character"},{"code":null,"e":1790,"s":1769,"text":"if temp <= 9, then −"},{"code":null,"e":1825,"s":1790,"text":"s := s + temp as numeric character"},{"code":null,"e":1860,"s":1825,"text":"s := s 
+ temp as numeric character"},{"code":null,"e":1895,"s":1860,"text":"Otherwises := s + temp as alphabet"},{"code":null,"e":1905,"s":1895,"text":"Otherwise"},{"code":null,"e":1931,"s":1905,"text":"s := s + temp as alphabet"},{"code":null,"e":1957,"s":1931,"text":"s := s + temp as alphabet"},{"code":null,"e":1973,"s":1957,"text":"num := num / 16"},{"code":null,"e":1989,"s":1973,"text":"num := num / 16"},{"code":null,"e":2009,"s":1989,"text":"reverse the array s"},{"code":null,"e":2029,"s":2009,"text":"reverse the array s"},{"code":null,"e":2038,"s":2029,"text":"return s"},{"code":null,"e":2047,"s":2038,"text":"return s"},{"code":null,"e":2119,"s":2047,"text":"Let us see the following implementation to get a better understanding −"},{"code":null,"e":2130,"s":2119,"text":" Live Demo"},{"code":null,"e":2645,"s":2130,"text":"#include \nusing namespace std;\nclass Solution {\npublic:\n string toHex(int num1){\n if (num1 == 0)\n return \"0\";\n u_int num = num1;\n string s = \"\";\n while (num) {\n int temp = num % 16;\n if (temp <= 9)\n s += (48 + temp);\n else\n s += (87 + temp);\n num = num / 16;\n }\n reverse(s.begin(), s.end());\n return s;\n }\n};\nmain(){\n Solution ob;\n cout << (ob.toHex(254)) << endl;\n cout << (ob.toHex(-12));\n}"},{"code":null,"e":2653,"s":2645,"text":"254\n-12"},{"code":null,"e":2665,"s":2653,"text":"fe\nfffffff4"}],"string":"[\n {\n \"code\": null,\n \"e\": 1213,\n \"s\": 1062,\n \"text\": \"Suppose we have an integer; we have to devise an algorithm to convert it to hexadecimal. 
For negative numbers we will use the two’s complement method.\"\n },\n {\n \"code\": null,\n \"e\": 1305,\n \"s\": 1213,\n \"text\": \"So, if the input is like 254 and -12, then the output will be fe and fffffff4 respectively.\"\n },\n {\n \"code\": null,\n \"e\": 1349,\n \"s\": 1305,\n \"text\": \"To solve this, we will follow these steps −\"\n },\n {\n \"code\": null,\n \"e\": 1388,\n \"s\": 1349,\n \"text\": \"if num1 is same as 0, then −return \\\"0\\\"\"\n },\n {\n \"code\": null,\n \"e\": 1417,\n \"s\": 1388,\n \"text\": \"if num1 is same as 0, then −\"\n },\n {\n \"code\": null,\n \"e\": 1428,\n \"s\": 1417,\n \"text\": \"return \\\"0\\\"\"\n },\n {\n \"code\": null,\n \"e\": 1439,\n \"s\": 1428,\n \"text\": \"return \\\"0\\\"\"\n },\n {\n \"code\": null,\n \"e\": 1451,\n \"s\": 1439,\n \"text\": \"num := num1\"\n },\n {\n \"code\": null,\n \"e\": 1463,\n \"s\": 1451,\n \"text\": \"num := num1\"\n },\n {\n \"code\": null,\n \"e\": 1481,\n \"s\": 1463,\n \"text\": \"s := blank string\"\n },\n {\n \"code\": null,\n \"e\": 1499,\n \"s\": 1481,\n \"text\": \"s := blank string\"\n },\n {\n \"code\": null,\n \"e\": 1648,\n \"s\": 1499,\n \"text\": \"while num is non-zero, do −temp := num mod 16if temp <= 9, then −s := s + temp as numeric characterOtherwises := s + temp as alphabetnum := num / 16\"\n },\n {\n \"code\": null,\n \"e\": 1676,\n \"s\": 1648,\n \"text\": \"while num is non-zero, do −\"\n },\n {\n \"code\": null,\n \"e\": 1695,\n \"s\": 1676,\n \"text\": \"temp := num mod 16\"\n },\n {\n \"code\": null,\n \"e\": 1714,\n \"s\": 1695,\n \"text\": \"temp := num mod 16\"\n },\n {\n \"code\": null,\n \"e\": 1769,\n \"s\": 1714,\n \"text\": \"if temp <= 9, then −s := s + temp as numeric character\"\n },\n {\n \"code\": null,\n \"e\": 1790,\n \"s\": 1769,\n \"text\": \"if temp <= 9, then −\"\n },\n {\n \"code\": null,\n \"e\": 1825,\n \"s\": 1790,\n \"text\": \"s := s + temp as numeric character\"\n },\n {\n \"code\": null,\n \"e\": 1860,\n \"s\": 1825,\n 
\"text\": \"s := s + temp as numeric character\"\n },\n {\n \"code\": null,\n \"e\": 1895,\n \"s\": 1860,\n \"text\": \"Otherwises := s + temp as alphabet\"\n },\n {\n \"code\": null,\n \"e\": 1905,\n \"s\": 1895,\n \"text\": \"Otherwise\"\n },\n {\n \"code\": null,\n \"e\": 1931,\n \"s\": 1905,\n \"text\": \"s := s + temp as alphabet\"\n },\n {\n \"code\": null,\n \"e\": 1957,\n \"s\": 1931,\n \"text\": \"s := s + temp as alphabet\"\n },\n {\n \"code\": null,\n \"e\": 1973,\n \"s\": 1957,\n \"text\": \"num := num / 16\"\n },\n {\n \"code\": null,\n \"e\": 1989,\n \"s\": 1973,\n \"text\": \"num := num / 16\"\n },\n {\n \"code\": null,\n \"e\": 2009,\n \"s\": 1989,\n \"text\": \"reverse the array s\"\n },\n {\n \"code\": null,\n \"e\": 2029,\n \"s\": 2009,\n \"text\": \"reverse the array s\"\n },\n {\n \"code\": null,\n \"e\": 2038,\n \"s\": 2029,\n \"text\": \"return s\"\n },\n {\n \"code\": null,\n \"e\": 2047,\n \"s\": 2038,\n \"text\": \"return s\"\n },\n {\n \"code\": null,\n \"e\": 2119,\n \"s\": 2047,\n \"text\": \"Let us see the following implementation to get a better understanding −\"\n },\n {\n \"code\": null,\n \"e\": 2130,\n \"s\": 2119,\n \"text\": \" Live Demo\"\n },\n {\n \"code\": null,\n \"e\": 2645,\n \"s\": 2130,\n \"text\": \"#include \\nusing namespace std;\\nclass Solution {\\npublic:\\n string toHex(int num1){\\n if (num1 == 0)\\n return \\\"0\\\";\\n u_int num = num1;\\n string s = \\\"\\\";\\n while (num) {\\n int temp = num % 16;\\n if (temp <= 9)\\n s += (48 + temp);\\n else\\n s += (87 + temp);\\n num = num / 16;\\n }\\n reverse(s.begin(), s.end());\\n return s;\\n }\\n};\\nmain(){\\n Solution ob;\\n cout << (ob.toHex(254)) << endl;\\n cout << (ob.toHex(-12));\\n}\"\n },\n {\n \"code\": null,\n \"e\": 2653,\n \"s\": 2645,\n \"text\": \"254\\n-12\"\n },\n {\n \"code\": null,\n \"e\": 2665,\n \"s\": 2653,\n \"text\": \"fe\\nfffffff4\"\n }\n]"}}},{"rowIdx":562,"cells":{"title":{"kind":"string","value":"Track COVID-19 Data Yourself Using R | 
by Chris Ross | Towards Data Science"},"text":{"kind":"string","value":"When the global pandemic was first gaining momentum earlier this year, I relied on the media for updates, as most people do. I soon found that the media reports were not only inconsistent but often presented incomplete information. The fundamental issue with these limitations is that they frequently lead to misunderstandings and misinterpretations of the data, as evidenced by the widely divergent views on the severity of this global crisis at any given point in time.\nMedia organizations often cherry-pick the stats and graphs they report with a preference for sensational headlines that align with the story they are trying to tell. You can’t blame them too much for this tendency given their primary goal is to engage their audience. But if you’re hoping to get an accurate and reliable picture of the current impact of COVID-19 in your country, including tracking and charting trends, you can’t beat analyzing the source data yourself.\nIn this article, we’re going to cover how to write a script in R to pull and analyze current coronavirus data. The best part is that once you’ve written the script, you can easily save and rerun it at will. You no longer need to rely on incomplete snapshots of the data based on what others decided to analyze and report. Now you’ll quickly and easily have the most recent COVID-19 data at your fingertips and can track current stats and trends yourself!\nI use R Studio and have this script saved in R markdown (Rmd). If you prefer to use base R, no worries, the code will work in a standard R script as well. I’m not going to cover how to install R [1] or R Studio [2], but both are open source (and free) and you can check the references below to find detailed documentation and installation instructions.\nFirst, open a new Rmd (or R) file. If you’re using Rmd, use Ctrl + Alt + I (Mac: Cmd + Option + I) to insert a code chunk. 
In this new chunk, we’ll load the packages we’re going to use. Note that you’ll first need to install these two packages if you haven’t already. I included the code below, to both install and load them, though you’ll need to uncomment the install lines (remove the hashtags) first if you need to install them. You only need to install packages once, so feel free to delete those commented lines once you’ve installed them.\n# install.packages('lubridate')\n# install.packages('tidyverse')\nlibrary(lubridate)\nlibrary(tidyverse)\nWe won’t go into too much detail about what each package does because it’s beyond the scope of this article. I will, however, provide a brief description of them below and include links to their documentation in the references if you want to learn more. I highly recommend you take a closer look at the docs for these packages if you plan to use R regularly. They are awesome and worth learning!\nThe first package is lubridate, which provides a lot of useful functions for working with date variables [3]. For our purposes, we’re just using lubridate to convert the date format provided in the dataset into a date variable recognized by R. The other package, tidyverse, is actually a group of packages that make up the “Tidyverse” [4]. When you install (and load) tidyverse, the entire group of included packages is automatically installed (and loaded) for you. Two of these included packages are worth mentioning because we’ll use them a few times, dplyr and ggplot2.\nThe dplyr package “is a grammar of data manipulation, providing a consistent set of verbs that help you solve the most common data manipulation challenges” [5]. There could be a full book just covering how to use dplyr, so that description will have to suffice for now. Ggplot2 is another amazing package used for plotting graphs and charts [6]. I took a full semester data visualization class in grad school that exclusively used ggplot2, and we only scratched the surface of available functionality. 
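To make the quoted description of dplyr concrete, here is a tiny, hedged sketch of the verb style on a made-up toy data frame (not the COVID-19 data); it assumes dplyr is installed:

```r
library(dplyr)

# Toy data frame with invented numbers, purely for illustration
toy <- data.frame(
  country = c("A", "A", "B", "B"),
  cases   = c(10, 20, 5, 15)
)

# The same verb pipeline used later in this article:
# group the rows, summarise each group, then sort the result
result <- toy %>%
  group_by(country) %>%
  summarise(cases_sum = sum(cases), cases_max = max(cases)) %>%
  arrange(desc(cases_sum))

print(result)  # country A: sum 30, max 20; country B: sum 20, max 15
```

Each verb returns a new data frame, which is why the steps chain together so naturally with %>%.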
After installing and loading these two packages, we’re ready to dig in and start exploring the data!\nWhen first looking for available coronavirus datasets earlier this year, I checked out several different options. After a few weeks of testing these options out, I developed a preference for the European Centre for Disease Prevention and Control (ECDC) data [7]. It’s consistently updated (nightly) and offers most of the info I was looking for, including daily case counts, deaths, and population info, broken down by country/territory. Over the past 4–5 months, they’ve updated the dataset a few times, changing the formatting and adding columns. But overall they’ve been remarkably consistent in maintaining and updating the data. Thus, the ECDC dataset has been my go-to source for COVID-19 tracking, and it’s what we’ll use here.\n# FROM: https://www.ecdc.europa.eu\ndata <- read.csv(\"https://opendata.ecdc.europa.eu/covid19/casedistribution/csv\", na.strings = \"\", fileEncoding = \"UTF-8-BOM\", stringsAsFactors = F)\ndata\n# convert date format\ndata$date_reported <- mdy(paste0(data$month, \"-\", data$day, \"-\", data$year))\nThe ECDC provides a handy URL for their dataset in CSV format which we can easily pull into R using the built-in “read.csv” function. The nice thing about this link is that they update the dataset daily using the same URL so you never need to change the code to import the dataset. After running the read.csv line (Ctrl + Enter to run a single line or highlighted selection), the dataset is saved in the “data” variable. Run the next line that just says “data” to take a peek at the raw dataset.\nAs you can see, the “dateRep” column formats the date in a rather unique manner using DD/MM/YYYY. The last line of code in the above chunk essentially just converts that date string into a format that R (and ggplot2) can read. 
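As one hedged alternative (not what this article does), the same conversion can be sketched in base R with as.Date(); the month, day, and year values below are hypothetical stand-ins for one row of those columns:

```r
# Hypothetical values standing in for data$month, data$day, data$year
month <- 4
day   <- 15
year  <- 2020

# Build an "M-D-YYYY" string and let as.Date parse it with a matching format
date_reported <- as.Date(paste0(month, "-", day, "-", year), format = "%m-%d-%Y")

print(date_reported)
print(class(date_reported))  # "Date"
```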
There are numerous ways to accomplish this task but for simplicity, we’ll use the mdy() function included in the lubridate package [3].\nLet’s start by taking a look at the cumulative total cases and deaths worldwide. The first line will sum the total number of COVID-19 cases globally to date. Next, we’ll break it down by country and calculate the total number of cases in that country, as well as the maximum number of cases reported in a single day in each country. We then sort the results by total cases in descending order. Below that, we will calculate the same thing but for coronavirus deaths this time (instead of new cases).\n# total cases worldwide to date\nsum(data$cases)\n# total cases and max single day by country\ndata %>% group_by(countriesAndTerritories) %>% summarise(cases_sum = sum(cases), cases_max = max(cases)) %>% arrange(desc(cases_sum))\n# total deaths worldwide to date\nsum(data$deaths)\n# total deaths and max single day by country\ndata %>% group_by(countriesAndTerritories) %>% summarise(deaths_sum = sum(deaths), deaths_max = max(deaths)) %>% arrange(desc(deaths_sum))\nThere will be four outputs from the above chunk of code. Below is the last output, showing the total COVID-19 deaths by country, as well as the max number of deaths in a single day. Currently (Aug. 2020), the US has the most coronavirus deaths by a fairly wide margin.\nNow we’ll start plotting the data to identify trends. Since I live in the US, I’m going to plot US cases. You can easily modify the code to use other countries, which we’ll cover shortly.\nus <- data[data$countriesAndTerritories == 'United_States_of_America',]\nus\nUS_cases <- ggplot(us, aes(date_reported, as.numeric(cases))) + geom_col(fill = 'blue', alpha = 0.6) + theme_minimal(base_size = 14) + xlab(NULL) + ylab(NULL) + scale_x_date(date_labels = \"%Y/%m/%d\")\nUS_cases + labs(title = \"Daily COVID-19 Cases in US\")\nFirst, I filter the dataset to just look at US cases and store that to a variable. 
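That bracket-subsetting idiom can be shown on a small toy data frame; the values below are invented purely for illustration:

```r
# Toy stand-in for the ECDC data frame (made-up numbers)
df <- data.frame(
  countriesAndTerritories = c("United_States_of_America", "Italy",
                              "United_States_of_America"),
  cases = c(100, 50, 120)
)

# Keep only the rows whose country column matches one value
us_rows <- df[df$countriesAndTerritories == "United_States_of_America", ]

print(nrow(us_rows))       # 2 rows remain
print(sum(us_rows$cases))  # 220
```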
I then use ggplot2 to plot the daily new cases of COVID-19. For more info on how to use ggplot2, check out their documentation [6]. After running the above chunk, you should see the below plot.\nAwesome, right? Of course, the picture painted by the data is not awesome, but the fact that you can track it yourself certainly is!\nLet’s move on and do the same thing for coronavirus deaths. The code is virtually the same except we are now tracking “deaths” instead of “cases.”\nUS_deaths <- ggplot(us, aes(date_reported, as.numeric(deaths))) + geom_col(fill = 'purple', alpha = 0.6) + theme_minimal(base_size = 14) + xlab(NULL) + ylab(NULL) + scale_x_date(date_labels = \"%Y/%m/%d\")\nUS_deaths + labs(title = \"Daily COVID-19 Deaths in US\")\nAs you can see, the death rate paints a different picture than the case counts. While the 2nd wave of cases was twice the size of the 1st, the 2nd wave of deaths hasn’t eclipsed the 1st (yet). These are interesting trends to watch unfold and are definitely worth tracking.\nHold onto your hats, because the last plot we’re going to cover will allow us to compare different countries! 
I chose the US, China, Italy, and Spain, but you can mix it up and choose whatever countries/territories you’re interested in.\n# Now let's add in a few more countries\nchina <- data[data$countriesAndTerritories == 'China',]\nspain <- data[data$countriesAndTerritories == 'Spain',]\nitaly <- data[data$countriesAndTerritories == 'Italy',]\nUSplot <- ggplot(us, aes(date_reported, as.numeric(Cumulative_number_for_14_days_of_COVID.19_cases_per_100000))) + geom_col(fill = 'blue', alpha = 0.6) + theme_minimal(base_size = 14) + xlab(NULL) + ylab(NULL) + scale_x_date(date_labels = \"%Y/%m/%d\")\nChina_US <- USplot + geom_col(data = china, aes(date_reported, as.numeric(Cumulative_number_for_14_days_of_COVID.19_cases_per_100000)), fill = \"red\", alpha = 0.5)\nCh_US_Sp <- China_US + geom_col(data = spain, aes(date_reported, as.numeric(Cumulative_number_for_14_days_of_COVID.19_cases_per_100000)), fill = \"#E69F00\", alpha = 0.4)\nChn_US_Sp_It <- Ch_US_Sp + geom_col(data = italy, aes(date_reported, as.numeric(Cumulative_number_for_14_days_of_COVID.19_cases_per_100000)), fill = \"#009E73\", alpha = 0.9)\nChn_US_Sp_It + labs(title = \"China, US, Italy, & Spain\")\nThis code chunk looks a bit more intimidating, but it’s actually pretty straightforward. At the top we filter the dataset for the countries we’re interested in and save each in its own variable. The next series of code blocks create the plots that we are going to stack, one for each country we add.\nConsidering we want to compare countries, we will use a new column this time, one that lists the cumulative number of new cases over the past 14 days per 100,000 people in that country. Since the population of each country varies, using the number of cases per 100,000 people allows us to standardize the case counts based on the population. Let’s see how that looks.\nThe little red hump (in the beginning) is China, with Italy in green, Spain in yellow, and the US in blue. 
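The standardization just described is simple arithmetic. A hedged sketch with made-up numbers shows how a per-100,000 figure is derived from a raw count and a population:

```r
# Hypothetical 14-day cumulative case count and country population
cases_14d  <- 50000
population <- 10000000  # 10 million people

# Scale the raw count to a rate per 100,000 residents
per_100k <- cases_14d / population * 100000

print(per_100k)  # 500
```

Because every country is expressed on the same per-100,000 scale, the stacked columns in the plot are directly comparable despite very different population sizes.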
As you can see, China was hit first although the overall impact was significantly lower than the other three countries (based on the numbers they reported at least). Sadly, the US still looks in pretty bad shape comparatively. Of these four countries, Spain was hit hardest by the first wave while the US and Italy appear similarly impacted.\nUnfortunately, the 2nd wave hit the US like a truck (in terms of cumulative cases), although we appear to be starting a downtrend (fingers crossed). Spain’s 2nd wave appears to still be ramping up. We’ll have to continue monitoring their data to see when they start flattening the curve. Italy, on the other hand, appears to have done an excellent job curbing the spread of COVID-19 after their first wave. There may be lessons we can learn from their success.\nThis is just the beginning. We’ve only scratched the surface of what you can do with this data. I hope you take what we covered here and run with it! Let it be a springboard for your own discovery. You no longer need to rely on snippets from the media, or other data scientists, to stay on top of the COVID-19 data and track trends. Let me know in the comments what you do with the data, and share with us what you discover!\n✍️ Subscribe to get my newest articles, featured in publications like The Startup & Towards Data Science ➡️\n[1] R, “R: The R Project for Statistical Computing” r-project.org, 2020. [Online]. Available: https://www.r-project.org. [Accessed: Aug. 10, 2020].\n[2] R Studio, “R Studio IDE Desktop” rstudio.com, 2020. [Online]. Available: https://rstudio.com/products/rstudio. [Accessed: Aug. 10, 2020].\n[3] lubridate package | R Documentation, “Make Dealing with Dates a Little Easier” rdocumentation.org, 2020. [Online]. Available: https://www.rdocumentation.org/packages/lubridate/versions/1.7.9. [Accessed: Aug. 10, 2020].\n[4] tidyverse package | R Documentation, “Easily Install and Load the ‘Tidyverse’” rdocumentation.org, 2020. [Online]. 
Available: https://www.rdocumentation.org/packages/tidyverse/versions/1.3.0. [Accessed: Aug. 10, 2020].\n[5] dplyr package | R Documentation, “A Grammar of Data Manipulation” rdocumentation.org, 2020. [Online]. Available: https://www.rdocumentation.org/packages/dplyr/versions/0.7.8. [Accessed: Aug. 10, 2020].\n[6] ggplot2 package | R Documentation, “Create Elegant Data Visualisations Using the Grammar of Graphics” rdocumentation.org, 2020. [Online]. Available: https://www.rdocumentation.org/packages/ggplot2/versions/3.3.2. [Accessed: Aug. 10, 2020].\n[7] European CDC, “ ECDC COVID-19 pandemic” ecdc.europa.eu, 2020. [Online]. Available: https://www.ecdc.europa.eu/en/covid-19-pandemic. [Accessed: Aug. 10, 2020]."},"parsed":{"kind":"list like","value":[{"code":null,"e":644,"s":172,"text":"When the global pandemic was first gaining momentum earlier this year, I relied on the media for updates, as most people do. I soon found that the media reports were not only inconsistent but often presented incomplete information. The fundamental issue with these limitations is that they frequently lead to misunderstandings and misinterpretations of the data, as evidenced by the widely divergent views on the severity of this global crisis at any given point in time."},{"code":null,"e":1115,"s":644,"text":"Media organizations often cherry-pick the stats and graphs they report with a preference for sensational headlines that align with the story they are trying to tell. You can’t blame them too much for this tendency given their primary goal is to engage their audience. But if you’re hoping to get an accurate and reliable picture of the current impact of COVID-19 in your country, including tracking and charting trends, you can’t beat analyzing the source data yourself."},{"code":null,"e":1570,"s":1115,"text":"In this article, we’re going to cover how to write a script in R to pull and analyze current coronavirus data. 
The best part is that once you’ve written the script, you can easily save and rerun it at will. You no longer need to rely on incomplete snapshots of the data based on what others decided to analyze and report. Now you’ll quickly and easily have the most recent COVID-19 data at your fingertips and can track current stats and trends yourself!"},{"code":null,"e":1923,"s":1570,"text":"I use R Studio and have this script saved in R markdown (Rmd). If you prefer to use base R, no worries, the code will work in a standard R script as well. I’m not going to cover how to install R [1] or R Studio [2], but both are open source (and free) and you can check the references below to find detailed documentation and installation instructions."},{"code":null,"e":2469,"s":1923,"text":"First, open a new Rmd (or R) file. If you’re using Rmd, use Ctrl + Alt + I (Mac: Cmd + Option + I) to insert a code chunk. In this new chunk, we’ll load the packages we’re going to use. Note that you’ll first need to install these two packages if you haven’t already. I included the code below, to both install and load them, though you’ll need to uncomment the install lines (remove the hashtags) first if you need to install them. You only need to install packages once, so feel free to delete those commented lines once you’ve installed them."},{"code":null,"e":2568,"s":2469,"text":"# install.packages('lubridate')# install.packages('tidyverse')library(lubridate)library(tidyverse)"},{"code":null,"e":2964,"s":2568,"text":"We won’t go into too much detail about what each package does because it’s beyond the scope of this article. I will, however, provide a brief description of them below and include links to their documentation in the references if you want to learn more. I highly recommend you take a closer look at the docs for these packages if you plan to use R regularly. 
They are awesome and worth learning!"},{"code":null,"e":3537,"s":2964,"text":"The first package is lubridate, which provides a lot of useful functions for working with date variables [3]. For our purposes, we’re just using lubridate to convert the date format provided in the dataset into a date variable recognized by R. The other package, tidyverse, is actually a group of packages that make up the “Tidyverse” [4]. When you install (and load) tidyverse, the entire group of included packages is automatically installed (and loaded) for you. Two of these included packages are worth mentioning because we’ll use them a few times, dplyr and ggplot2."},{"code":null,"e":4139,"s":3537,"text":"The dplyr package “is a grammar of data manipulation, providing a consistent set of verbs that help you solve the most common data manipulation challenges” [5]. There could be a full book just covering how to use dplyr, so that description will have to suffice for now. The ggplot2 package is another amazing one, used for plotting graphs and charts [6]. I took a full-semester data visualization class in grad school that exclusively used ggplot2, and we only scratched the surface of the available functionality. After installing and loading these two packages, we’re ready to dig in and start exploring the data!"},{"code":null,"e":4858,"s":4139,"text":"When first looking for available coronavirus datasets earlier this year, I checked out several different options. After a few weeks of testing these options out, I developed a preference for the European Centre for Disease Prevention and Control (ECDC) data [7]. It’s consistently updated (nightly) and offers most of the info I was looking for, including daily case counts, deaths, and population info, broken down by country/territory. Over the past 4–5 months, they’ve updated the dataset a few times, changing the formatting and adding columns. But overall they’ve been remarkably consistent in maintaining and updating the data. 
Thus, the ECDC dataset has been my go-to source for COVID-19 tracking, and it’s what we’ll use here."},{"code":null,"e":5137,"s":4858,"text":"# FROM: https://www.ecdc.europa.eu
data <- read.csv("https://opendata.ecdc.europa.eu/covid19/casedistribution/csv", na.strings = "", fileEncoding = "UTF-8-BOM", stringsAsFactors = F)
data
# convert date format
data$date_reported <- mdy(paste0(data$month, "-", data$day, "-", data$year))"},{"code":null,"e":5633,"s":5137,"text":"The ECDC provides a handy URL for their dataset in CSV format, which we can easily pull into R using the built-in “read.csv” function. The nice thing about this link is that they update the dataset daily using the same URL, so you never need to change the code to import the dataset. After running the read.csv line (Ctrl + Enter to run a single line or highlighted selection), the dataset is saved in the “data” variable. Run the next line that just says “data” to take a peek at the raw dataset."},{"code":null,"e":5998,"s":5633,"text":"As you can see, the “dateRep” column formats the date in a rather unusual manner using DD/MM/YYYY. The last line of code in the above chunk essentially just converts that date string into a format that R (and ggplot2) can read. There are numerous ways to accomplish this task, but for simplicity, we’ll use the mdy() function included in the lubridate package [3]."},{"code":null,"e":6498,"s":5998,"text":"Let’s start by taking a look at the cumulative total cases and deaths worldwide. The first line will sum the total number of COVID-19 cases globally to date. Next, we’ll break it down by country and calculate the total number of cases in that country, as well as the maximum number of cases reported in a single day in each country. We then sort the results by total cases in descending order. 
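The grouping-and-sorting logic just described can be sketched on a tiny stand-in data frame (my own illustration on toy numbers, not part of the original article; the column name `countriesAndTerritories` mirrors the ECDC file):

```r
library(dplyr)

# Toy stand-in for the ECDC data: two countries, three days of counts each
toy <- data.frame(
  countriesAndTerritories = c("A", "A", "A", "B", "B", "B"),
  cases = c(10, 30, 20, 5, 5, 5)
)

# Total cases and the worst single day per country, sorted by total (descending)
summary_df <- toy %>%
  group_by(countriesAndTerritories) %>%
  summarise(cases_sum = sum(cases), cases_max = max(cases)) %>%
  arrange(desc(cases_sum))

print(summary_df)  # A: 60 total, max 30; B: 15 total, max 5
```

Applying the same three verbs to the real data frame gives the per-country tables used in this section.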
Below that, we will calculate the same thing but for coronavirus deaths this time (instead of new cases)."},{"code":null,"e":6957,"s":6498,"text":"# total cases worldwide to date
sum(data$cases)
# total cases and max single day by country
data %>% group_by(countriesAndTerritories) %>% summarise(cases_sum = sum(cases), cases_max = max(cases)) %>% arrange(desc(cases_sum))
# total deaths worldwide to date
sum(data$deaths)
# total deaths and max single day by country
data %>% group_by(countriesAndTerritories) %>% summarise(deaths_sum = sum(deaths), deaths_max = max(deaths)) %>% arrange(desc(deaths_sum))"},{"code":null,"e":7226,"s":6957,"text":"There will be four outputs from the above chunk of code. Below is the last output, showing the total COVID-19 deaths by country, as well as the max number of deaths in a single day. Currently (Aug. 2020), the US has the most coronavirus deaths by a fairly wide margin."},{"code":null,"e":7413,"s":7226,"text":"Now we’ll start plotting the data to identify trends. Since I live in the US, I’m going to plot US cases. You can easily modify the code to use other countries, which we’ll cover shortly."},{"code":null,"e":7740,"s":7413,"text":"us <- data[data$countriesAndTerritories == 'United_States_of_America',]
us
US_cases <- ggplot(us, aes(date_reported, as.numeric(cases))) +
  geom_col(fill = 'blue', alpha = 0.6) +
  theme_minimal(base_size = 14) +
  xlab(NULL) + ylab(NULL) +
  scale_x_date(date_labels = "%Y/%m/%d")
US_cases + labs(title = "Daily COVID-19 Cases in US")"},{"code":null,"e":8017,"s":7740,"text":"First, I filter the dataset to just look at US cases and store that in a variable. I then use ggplot2 to plot the daily new cases of COVID-19. For more info on how to use ggplot2, check out their documentation [6]. After running the above chunk, you should see the below plot."},{"code":null,"e":8150,"s":8017,"text":"Awesome, right? 
Of course, the picture painted by the data is not awesome, but the fact that you can track it yourself certainly is!"},{"code":null,"e":8297,"s":8150,"text":"Let’s move on and do the same thing for coronavirus deaths. The code is virtually the same, except we are now tracking “deaths” instead of “cases.”"},{"code":null,"e":8557,"s":8297,"text":"US_deaths <- ggplot(us, aes(date_reported, as.numeric(deaths))) +
  geom_col(fill = 'purple', alpha = 0.6) +
  theme_minimal(base_size = 14) +
  xlab(NULL) + ylab(NULL) +
  scale_x_date(date_labels = "%Y/%m/%d")
US_deaths + labs(title = "Daily COVID-19 Deaths in US")"},{"code":null,"e":8830,"s":8557,"text":"As you can see, the death rate paints a different picture than the case counts. While the 2nd wave of cases was twice the size of the 1st, the 2nd wave of deaths hasn’t eclipsed the 1st (yet). These are interesting trends to watch unfold and are definitely worth tracking."},{"code":null,"e":9067,"s":8830,"text":"Hold onto your hats, because the last plot we’re going to cover will allow us to compare different countries! 
I chose the US, China, Italy, and Spain, but you can mix it up and choose whatever countries/territories you’re interested in."},{"code":null,"e":10071,"s":9067,"text":"# Now let's add in a few more countries
china <- data[data$countriesAndTerritories == 'China',]
spain <- data[data$countriesAndTerritories == 'Spain',]
italy <- data[data$countriesAndTerritories == 'Italy',]
USplot <- ggplot(us, aes(date_reported, as.numeric(Cumulative_number_for_14_days_of_COVID.19_cases_per_100000))) +
  geom_col(fill = 'blue', alpha = 0.6) +
  theme_minimal(base_size = 14) +
  xlab(NULL) + ylab(NULL) +
  scale_x_date(date_labels = "%Y/%m/%d")
China_US <- USplot + geom_col(data = china, aes(date_reported, as.numeric(Cumulative_number_for_14_days_of_COVID.19_cases_per_100000)), fill = "red", alpha = 0.5)
Ch_US_Sp <- China_US + geom_col(data = spain, aes(date_reported, as.numeric(Cumulative_number_for_14_days_of_COVID.19_cases_per_100000)), fill = "#E69F00", alpha = 0.4)
Chn_US_Sp_It <- Ch_US_Sp + geom_col(data = italy, aes(date_reported, as.numeric(Cumulative_number_for_14_days_of_COVID.19_cases_per_100000)), fill = "#009E73", alpha = 0.9)
Chn_US_Sp_It + labs(title = "China, US, Italy, & Spain")"},{"code":null,"e":10371,"s":10071,"text":"This code chunk looks a bit more intimidating, but it’s actually pretty straightforward. At the top we filter the dataset for the countries we’re interested in and save each in its own variable. The next series of code blocks creates the plots that we are going to stack, one for each country we add."},{"code":null,"e":10739,"s":10371,"text":"Considering we want to compare countries, we will use a new column this time, one that lists the cumulative number of new cases over the past 14 days per 100,000 people in that country. Since the population of each country varies, using the number of cases per 100,000 people allows us to standardize the case counts based on the population. 
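If you ever need to derive that per-100,000 column yourself rather than rely on the precomputed one, the arithmetic is: sum each country's new cases over a trailing 14-day window, divide by the population, and multiply by 100,000. A minimal sketch on made-up numbers (my own illustration; in the real file the population sits in a column such as popData2019, which you should verify against your copy):

```r
# Toy series: 100 new cases per day for 20 days, population 5 million
cases <- rep(100, 20)
population <- 5e6

# Trailing 14-day cumulative incidence per 100,000 inhabitants
cum14_per_100k <- sapply(seq_along(cases), function(i) {
  window <- cases[max(1, i - 13):i]  # the up-to-14 most recent days
  sum(window) / population * 1e5
})

# Once the window is full: 14 * 100 = 1400 cases -> 1400 / 5e6 * 1e5 = 28
print(tail(cum14_per_100k, 1))
```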
Let’s see how that looks."},{"code":null,"e":11188,"s":10739,"text":"The little red hump (in the beginning) is China, with Italy in green, Spain in yellow, and the US in blue. As you can see, China was hit first although the overall impact was significantly lower than the other three countries (based on the numbers they reported at least). Sadly, the US still looks in pretty bad shape comparatively. Of these four countries, Spain was hit hardest by the first wave while the US and Italy appear similarly impacted."},{"code":null,"e":11649,"s":11188,"text":"Unfortunately, the 2nd wave hit the US like a truck (in terms of cumulative cases), although we appear to be starting a downtrend (fingers crossed). Spain’s 2nd wave appears to still be ramping up. We’ll have to continue monitoring their data to see when they start flattening the curve. Italy, on the other hand, appears to have done an excellent job curbing the spread of COVID-19 after their first wave. There may be lessons we can learn from their success."},{"code":null,"e":12074,"s":11649,"text":"This is just the beginning. We’ve only scratched the surface of what you can do with this data. I hope you take what we covered here and run with it! Let it be a springboard for your own discovery. You no longer need to rely on snippets from the media, or other data scientists, to stay on top of the COVID-19 data and track trends. Let me know in the comments what you do with the data, and share with us what you discover!"},{"code":null,"e":12182,"s":12074,"text":"✍️ Subscribe to get my newest articles, featured in publications like The Startup & Towards Data Science ➡️"},{"code":null,"e":12330,"s":12182,"text":"[1] R, “R: The R Project for Statistical Computing” r-project.org, 2020. [Online]. Available: https://www.r-project.org. [Accessed: Aug. 10, 2020]."},{"code":null,"e":12472,"s":12330,"text":"[2] R Studio, “R Studio IDE Desktop” rstudio.com, 2020. [Online]. Available: https://rstudio.com/products/rstudio. 
[Accessed: Aug. 10, 2020]."},{"code":null,"e":12695,"s":12472,"text":"[3] lubridate package | R Documentation, “Make Dealing with Dates a Little Easier” rdocumentation.org, 2020. [Online]. Available: https://www.rdocumentation.org/packages/lubridate/versions/1.7.9. [Accessed: Aug. 10, 2020]."},{"code":null,"e":12918,"s":12695,"text":"[4] tidyverse package | R Documentation, “Easily Install and Load the ‘Tidyverse’” rdocumentation.org, 2020. [Online]. Available: https://www.rdocumentation.org/packages/tidyverse/versions/1.3.0. [Accessed: Aug. 10, 2020]."},{"code":null,"e":13124,"s":12918,"text":"[5] dplyr package | R Documentation, “A Grammar of Data Manipulation” rdocumentation.org, 2020. [Online]. Available: https://www.rdocumentation.org/packages/dplyr/versions/0.7.8. [Accessed: Aug. 10, 2020]."},{"code":null,"e":13368,"s":13124,"text":"[6] ggplot2 package | R Documentation, “Create Elegant Data Visualisations Using the Grammar of Graphics” rdocumentation.org, 2020. [Online]. Available: https://www.rdocumentation.org/packages/ggplot2/versions/3.3.2. [Accessed: Aug. 10, 2020]."}]}}},{"rowIdx":563,"cells":{"title":{"kind":"string","value":"Java String regionMatches() Method with Examples - GeeksforGeeks"},"text":{"kind":"string","value":"10 Dec, 2021\nThe regionMatches() method of the String class has two variants that can be used to test if two string regions are matching or equal. There are two variants of this method, i.e., one is case sensitive test method, and the other ignores the case-sensitive method.\nSyntax:\n1. Case sensitive test method:\npublic boolean regionMatches(int toffset, String other, int ooffset, int len)\n2. It has the option to consider or ignore the case method:\npublic boolean regionMatches(boolean ignoreCase, int toffset, String other, int ooffset, int len)\nParameters:\nignoreCase: if true, ignore the case when comparing characters.\ntoffset: the starting offset of the subregion in this string.\nother: the string argument being compared.\nooffset: the starting offset of the subregion in the string argument.\nlen: the number of characters to compare.\nReturn Value:\nA substring of the String object is compared to a substring of the argument other. The result is true if these substrings represent character sequences that are the same, ignoring case if and only if ignoreCase is true. The substring of this String object to be compared begins at index toffset and has length len. The substring of other to be compared begins at index ooffset and has length len. 
The result is false if and only if at least one of the following is true\nExample 1:\nJava\n// Java Program to find if substrings// or regions of two strings are equal import java.io.*; class CheckIfRegionsEqual { public static void main(String args[]) { // create three string objects String str1 = new String(\"Welcome to Geeksforgeeks.com\"); String str2 = new String(\"Geeksforgeeks\"); String str3 = new String(\"GEEKSFORGEEKS\"); // Comparing str1 and str2 System.out.print( \"Result of Comparing of String 1 and String 2: \"); System.out.println( str1.regionMatches(11, str2, 0, 13)); // Comparing str1 and str3 System.out.print( \"Result of Comparing of String 1 and String 3: \"); System.out.println( str1.regionMatches(11, str3, 0, 13)); // Comparing str2 and str3 System.out.print( \"Result of Comparing of String 2 and String 3: \"); System.out.println( str2.regionMatches(0, str3, 0, 13)); }}\nResult of Comparing of String 1 and String 2: true\nResult of Comparing of String 1 and String 3: false\nResult of Comparing of String 2 and String 3: false\nExample 2:\nJava\n// Java Program to find if substrings// or regions of two strings are equal import java.io.*; class CheckIfRegionsEqual { public static void main(String args[]) { // create three string objects String str1 = new String(\"Abhishek Rout\"); String str2 = new String(\"abhishek\"); String str3 = new String(\"ABHISHEK\"); // Comparing str1 and str2 substrings System.out.print( \"Result of comparing String 1 and String 2 : \"); System.out.println( str1.regionMatches(true, 0, str2, 0, 8)); // Comparing str1 and str3 substrings System.out.print( \"Result of comparing String 1 and String 3 : \"); System.out.println( str1.regionMatches(false, 0, str3, 0, 8)); // Comparing str2 and str3 substrings System.out.print( \"Result of comparing String 2 and String 3 : \"); System.out.println( str2.regionMatches(true, 0, str3, 0, 8)); }}\nResult of comparing String 1 and String 2 : true\nResult of comparing String 1 and 
String 3 : false\nResult of comparing String 2 and String 3 : true\nNote: The method returns false if at least one of these is true,\ntoffset is negative.\nooffset is negative.\ntoffset+len is greater than the length of this String object.\nooffset+len is greater than the length of the other argument.\nignoreCase is false, and there is some nonnegative integer k less than len such that:\n this.charAt(toffset+k) != other.charAt(ooffset+k)\nignoreCase is true, and there is some nonnegative integer k less than len such that:\n Character.toLowerCase(Character.toUpperCase(this.charAt(toffset+k))) != \n Character.toLowerCase(Character.toUpperCase(other.charAt(ooffset+k)))\nnishkarshgandhi"},"parsed":{"kind":"list like","value":[{"code":null,"e":23557,"s":23529,"text":"\n10 Dec, 2021"},{"code":null,"e":23820,"s":23557,"text":"The regionMatches() method of the String class has two variants that can be used to test if two string regions are matching or equal. There are two variants of this method, i.e., one is case sensitive test method, and the other ignores the case-sensitive method."},{"code":null,"e":23828,"s":23820,"text":"Syntax:"},{"code":null,"e":23859,"s":23828,"text":"1. Case sensitive test method:"},{"code":null,"e":23937,"s":23859,"text":"public boolean regionMatches(int toffset, String other, int ooffset, int len)"},{"code":null,"e":23997,"s":23937,"text":"2. 
Variational Autoencoder
Demystified With PyTorch Implementation. | by William Falcon | Towards Data Science

It’s likely that you’ve searched for VAE tutorials but have come away empty-handed. Either the tutorial uses MNIST instead of color images, or the concepts are conflated and not explained clearly.

You’re in luck!

This tutorial covers all aspects of VAEs, including the matching math and an implementation on a realistic dataset of color images.

The outline is as follows:

Resources (github code, colab).
ELBO definition (optional).
ELBO, KL divergence explanation (optional).
ELBO, reconstruction loss explanation (optional).
PyTorch implementation

Follow along with this colab.

Code is also available on Github here (don’t forget to star!).

For a production/research-ready implementation, simply install pytorch-lightning-bolts

pip install pytorch-lightning-bolts

and import and use/subclass

from pl_bolts.models.autoencoders import VAE

model = VAE()
trainer = Trainer()
trainer.fit(model)

In this section, we’ll discuss the VAE loss. If you don’t care for the math, feel free to skip this section!

Distributions: First, let’s define a few things. Let p define a probability distribution, and let q define a probability distribution as well. These could be any distributions you want (Normal, etc.). In this tutorial, we don’t specify what they are, to keep things easier to understand.

So, when you see p or q, just think of a black box that is a distribution. Don’t worry about what is inside.

VAE loss: The loss function for the VAE is called the ELBO. It looks like this:

    min KL( q(z|x) || p(z) ) - E_q[ log p(x|z) ]

The first term is the KL divergence. The second term is the reconstruction term.

Confusion point 1 MSE: Most tutorials equate reconstruction with MSE.
But this is misleading, because MSE only works when you use certain distributions for p and q.

Confusion point 2 KL divergence: Most other tutorials use p, q that are normal. If you assume p, q are Normal distributions, the KL term looks like this (in code):

kl = torch.mean(-0.5 * torch.sum(1 + log_var - mu ** 2 - log_var.exp(), dim = 1), dim = 0)

But in our equation, we DO NOT assume these are normal. We do this because it makes things much easier to understand, and it keeps the implementation general so you can use any distribution you want.

Let’s break down each component of the loss to understand what each is doing.

Let’s first look at the KL divergence term.

The first part (min) says that we want to minimize this. Next to that, the E term stands for expectation under q. This means we draw a sample (z) from the q distribution.

Notice that in this case, I used a Normal(0, 1) distribution for q. When we code the loss, we have to specify the distributions we want to use.

Now that we have a sample, the next parts of the formula ask for two things: 1) the log probability of z under the q distribution, and 2) the log probability of z under the p distribution.

Notice that z has almost zero probability of having come from p, but it has a 6% probability of having come from q. If we visualize this, it’s clear why:

z has a value of 6.0110. If you look at the area of q where z is (i.e., the probability), it’s clear that there is a non-zero chance it came from q. But if you look at p, there’s basically a zero chance that it came from p.

If we look back at this part of the loss,

you can see that we are minimizing the difference between these probabilities.

So, to maximize the probability of z under p, we have to shift q closer to p, so that when we sample a new z from q, that value will have a much higher probability.

Let’s verify this via code,

and now our new KL divergence is:

Now, this z has a single dimension. But in the real world, we care about n-dimensional zs.
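As a hedged, self-contained sketch of the single-dimension case just described (the specific distribution parameters here are my own illustrative choices, not the article's original gist), we can estimate the KL by sampling z from q and averaging log q(z) - log p(z), and confirm that the estimate shrinks as q moves toward p:

```python
import torch
from torch.distributions import Normal

torch.manual_seed(0)

# fixed prior p = N(0, 1); q starts far away at N(6, 1)
p = Normal(torch.zeros(1), torch.ones(1))
q = Normal(torch.tensor([6.0]), torch.tensor([1.0]))

# Monte Carlo estimate of KL(q || p): sample z from q, average log q(z) - log p(z)
z = q.rsample((100_000,))
mc_kl = (q.log_prob(z) - p.log_prob(z)).mean()

# shifting q closer to p makes the KL shrink
q_closer = Normal(torch.tensor([1.0]), torch.tensor([1.0]))
z2 = q_closer.rsample((100_000,))
mc_kl_closer = (q_closer.log_prob(z2) - p.log_prob(z2)).mean()

print(mc_kl.item(), mc_kl_closer.item())  # analytic values are 18.0 and 0.5
```

With 100k samples the estimates land very close to the closed-form values for Normals, 0.5 * mu**2 here, which is exactly the "shift q closer to p" effect described above.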
To handle this in the implementation, we simply sum over the last dimension. The trick here is that when sampling from a univariate distribution (in this case Normal), if you sum across many of these distributions, it’s equivalent to using an n-dimensional distribution (an n-dimensional Normal in this case).

Here’s the KL divergence that is distribution agnostic in PyTorch.

This generic form of the KL is called the Monte Carlo approximation. This means we sample z many times and estimate the KL divergence. (In practice, these estimates are really good, and with a batch size of 128 or more, the estimate is very accurate.)

The second term we’ll look at is the reconstruction term.

In the KL explanation we used p(z) and q(z|x). For this equation, we need to define a third distribution, P_rec(x|z). To avoid confusion, we’ll use P_rec to differentiate it.

Tip: DO NOT confuse P_rec(x|z) and p(z).

So, in this equation we again sample z from q. But now we use that z to calculate the probability of seeing the input x (i.e., a color image in this case) given the z that we sampled.

First we need to think of our images as having a distribution in image space. Imagine a very high dimensional distribution. For a color image that is 32x32 pixels, that means this distribution has (3x32x32 = 3072) dimensions.

So, now we need a way to map the z vector (which is low dimensional) back into a super high dimensional distribution, from which we can measure the probability of seeing this particular image. In VAEs, we use a decoder for that.

Confusion point 3: Most tutorials show x_hat as an image. However, this is wrong. x_hat IS NOT an image. These are PARAMETERS for a distribution. But because these tutorials use MNIST, the output is already in the zero-one range and can be interpreted as an image.
But with color images, this is not true.

To finalize the calculation of this formula, we use x_hat to parametrize a likelihood distribution (in this case a Normal again), so that we can measure the probability of the input (image) under this high dimensional distribution.

That is, we are asking the same question: Given P_rec(x|z) and this image, what is the probability?

Since the reconstruction term has a negative sign in front of it, we minimize it by maximizing the probability of this image under P_rec(x|z).

Some things may still not be obvious from this explanation. First, each image will end up with its own q. The KL term will push all the qs towards the same p (called the prior). But if all the qs collapse to p, then the network can cheat by just mapping everything to zero, and thus the VAE will collapse.

The reconstruction term forces each q to be unique and spread out, so that the image can be reconstructed correctly. This keeps all the qs from collapsing onto each other.

As you can see, both terms provide a nice balance to each other. This is also why you may experience instability in training VAEs!

Now that you understand the intuition behind the approach and the math, let’s code up the VAE in PyTorch. For this implementation, I’ll use PyTorch Lightning, which will keep the code short but still scalable.

If you skipped the earlier sections, recall that we are now going to implement the following VAE loss:

This equation has 3 distributions. Our code will be agnostic to the distributions, but we’ll use Normal for all of them.

The first distribution: q(z|x) needs parameters, which we generate via an encoder.

The second distribution: p(z) is the prior, which we will fix to a specific location (0, 1).
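As a concrete sketch of these first two distributions, the snippet below builds q(z|x) from encoder outputs and a fixed prior p(z). The encoder here is a hypothetical stand-in (a single linear layer with made-up sizes), not the article's actual model:

```python
import torch
from torch.distributions import Normal

torch.manual_seed(0)

# hypothetical stand-in encoder: flattened 3x32x32 image -> (mu, log_var)
latent_dim = 64
encoder = torch.nn.Linear(3 * 32 * 32, 2 * latent_dim)

x = torch.randn(16, 3 * 32 * 32)           # a batch of 16 flattened "color images"
mu, log_var = encoder(x).chunk(2, dim=-1)  # split encoder output into mu and log_var
std = torch.exp(log_var / 2)               # std must be positive

q = Normal(mu, std)                        # q(z|x): parameters come from the encoder
p = Normal(torch.zeros_like(mu), torch.ones_like(std))  # p(z): fixed at (0, 1)

z = q.rsample()                            # reparameterized sample, one z per image
print(z.shape)                             # torch.Size([16, 64])
```

Note that p has no learnable parameters, so the KL term can only be reduced by moving q's mu and std toward (0, 1).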
By fixing this distribution, the KL divergence term will force q(z|x) to move closer to p by updating the parameters.

The optimization starts out with two distributions like this (q, p),

and over time moves q closer to p (p is fixed, as you saw, and q has learnable parameters).

The third distribution: p(x|z) (usually called the reconstruction) will be used to measure the probability of seeing the image (input) given the z that was sampled.

Think about this image as having 3072 dimensions (3 channels x 32 pixels x 32 pixels).

So, we can now write a full class that implements this algorithm.

What’s nice about Lightning is that all the hard logic is encapsulated in the training_step. This means everyone can know exactly what something is doing when it is written in Lightning, by looking at the training_step.

Data: The Lightning VAE is fully decoupled from the data! This means we can train on ImageNet, or whatever you want. For speed and cost purposes, I’ll use CIFAR-10 (a much smaller image dataset).

Lightning uses regular PyTorch dataloaders. But it’s annoying to have to figure out transforms and other settings to get the data in usable shape. For this, we’ll use the optional abstraction (Datamodule), which hides all this complexity.

Now that we have the VAE and the data, we can train it on as many GPUs as we want. In this case, Colab gives us just 1, so we’ll use that.

And we’ll see training start...

Even just after 18 epochs, I can look at the reconstruction.

Even though we didn’t train for long, and used no fancy tricks like perceptual losses, we get something that kind of looks like samples from CIFAR-10.

In the next post, I’ll cover the derivation of the ELBO!

Remember to star the repo and share if this was useful.
Either the tutorial uses MNIST instead of color images or the concepts are conflated and not explained clearly."},{"code":null,"e":383,"s":367,"text":"You’re in luck!"},{"code":null,"e":511,"s":383,"text":"This tutorial covers all aspects of VAEs including the matching math and implementation on a realistic dataset of color images."},{"code":null,"e":538,"s":511,"text":"The outline is as follows:"},{"code":null,"e":711,"s":538,"text":"Resources (github code, colab).ELBO definition (optional).ELBO, KL divergence explanation (optional).ELBO, reconstruction loss explanation (optional).PyTorch implementation"},{"code":null,"e":743,"s":711,"text":"Resources (github code, colab)."},{"code":null,"e":771,"s":743,"text":"ELBO definition (optional)."},{"code":null,"e":815,"s":771,"text":"ELBO, KL divergence explanation (optional)."},{"code":null,"e":865,"s":815,"text":"ELBO, reconstruction loss explanation (optional)."},{"code":null,"e":888,"s":865,"text":"PyTorch implementation"},{"code":null,"e":918,"s":888,"text":"Follow along with this colab."},{"code":null,"e":981,"s":918,"text":"Code is also available on Github here (don’t forget to star!)."},{"code":null,"e":1067,"s":981,"text":"For a production/research-ready implementation simply install pytorch-lightning-bolts"},{"code":null,"e":1103,"s":1067,"text":"pip install pytorch-lightning-bolts"},{"code":null,"e":1131,"s":1103,"text":"and import and use/subclass"},{"code":null,"e":1226,"s":1131,"text":"from pl_bolts.models.autoencoders import VAEmodel = VAE()trainer = Trainer()trainer.fit(model)"},{"code":null,"e":1335,"s":1226,"text":"In this section, we’ll discuss the VAE loss. If you don’t care for the math, feel free to skip this section!"},{"code":null,"e":1636,"s":1335,"text":"Distributions: First, let’s define a few things. Let p define a probability distribution. Let q define a probability distribution as well. These distributions could be any distribution you want like Normal, etc... 
In this tutorial, we don’t specify what these are to keep things easier to understand."},{"code":null,"e":1747,"s":1636,"text":"So, when you see p, or q, just think of a blackbox that is a distribution. Don’t worry about what is in there."},{"code":null,"e":1833,"s":1747,"text":"VAE loss: The loss function for the VAE is called the ELBO. The ELBO looks like this:"},{"code":null,"e":1914,"s":1833,"text":"The first term is the KL divergence. The second term is the reconstruction term."},{"code":null,"e":2075,"s":1914,"text":"Confusion point 1 MSE: Most tutorials equate reconstruction with MSE. But this is misleading because MSE only works when you use certain distributions for p, q."},{"code":null,"e":2239,"s":2075,"text":"Confusion point 2 KL divergence: Most other tutorials use p, q that are normal. If you assume p, q are Normal distributions, the KL term looks like this (in code):"},{"code":null,"e":2330,"s":2239,"text":"kl = torch.mean(-0.5 * torch.sum(1 + log_var - mu ** 2 - log_var.exp(), dim = 1), dim = 0)"},{"code":null,"e":2526,"s":2330,"text":"But in our equation, we DO NOT assume these are normal. We do this because it makes things much easier to understand and keeps the implementation general so you can use any distribution you want."},{"code":null,"e":2604,"s":2526,"text":"Let’s break down each component of the loss to understand what each is doing."},{"code":null,"e":2648,"s":2604,"text":"Let’s first look at the KL divergence term."},{"code":null,"e":2819,"s":2648,"text":"The first part (min) says that we want to minimize this. Next to that, the E term stands for expectation under q. This means we draw a sample (z) from the q distribution."},{"code":null,"e":2963,"s":2819,"text":"Notice that in this case, I used a Normal(0, 1) distribution for q. 
When we code the loss, we have to specify the distributions we want to use."},{"code":null,"e":3148,"s":2963,"text":"Now that we have a sample, the next parts of the formula ask for two things: 1) the log probability of z under the q distribution, 2) the log probability of z under the p distribution."},{"code":null,"e":3296,"s":3148,"text":"Notice that z has almost zero probability of having come from p. But has 6% probability of having come from q. If we visualize this it’s clear why:"},{"code":null,"e":3519,"s":3296,"text":"z has a value of 6.0110. If you look at the area of q where z is (ie: the probability), it’s clear that there is a non-zero chance it came from q. But, if you look at p, there’s basically a zero chance that it came from p."},{"code":null,"e":3560,"s":3519,"text":"If we look back at this part of the loss"},{"code":null,"e":3639,"s":3560,"text":"You can see that we are minimizing the difference between these probabilities."},{"code":null,"e":3804,"s":3639,"text":"So, to maximize the probability of z under p, we have to shift q closer to p, so that when we sample a new z from q, that value will have a much higher probability."},{"code":null,"e":3831,"s":3804,"text":"Let’s verify this via code"},{"code":null,"e":3865,"s":3831,"text":"and now our new kl divergence is:"},{"code":null,"e":4263,"s":3865,"text":"Now, this z has a single dimension. But in the real world, we care about n-dimensional zs. To handle this in the implementation, we simply sum over the last dimension. The trick here is that when sampling from a univariate distribution (in this case Normal), if you sum across many of these distributions, it’s equivalent to using an n-dimensional distribution (n-dimensional Normal in this case)."},{"code":null,"e":4330,"s":4263,"text":"Here’s the kl divergence that is distribution agnostic in PyTorch."},{"code":null,"e":4581,"s":4330,"text":"This generic form of the KL is called the monte-carlo approximation. 
This means we sample z many times and estimate the KL divergence. (in practice, these estimates are really good and with a batch size of 128 or more, the estimate is very accurate)."},{"code":null,"e":4639,"s":4581,"text":"The second term we’ll look at is the reconstruction term."},{"code":null,"e":4807,"s":4639,"text":"In the KL explanation we used p(z), q(z|x). For this equation, we need to define a third distribution, P_rec(x|z). To avoid confusion we’ll use P_rec to differentiate."},{"code":null,"e":4848,"s":4807,"text":"Tip: DO NOT confuse P_rec(x|z) and p(z)."},{"code":null,"e":5030,"s":4848,"text":"So, in this equation we again sample z from q. But now we use that z to calculate the probability of seeing the input x (ie: a color image in this case) given the z that we sampled."},{"code":null,"e":5256,"s":5030,"text":"First we need to think of our images as having a distribution in image space. Imagine a very high dimensional distribution. For a color image that is 32x32 pixels, that means this distribution has (3x32x32 = 3072) dimensions."},{"code":null,"e":5484,"s":5256,"text":"So, now we need a way to map the z vector (which is low dimensional) back into a super high dimensional distribution from which we can measure the probability of seeing this particular image. In VAEs, we use a decoder for that."},{"code":null,"e":5790,"s":5484,"text":"Confusion point 3: Most tutorials show x_hat as an image. However, this is wrong. x_hat IS NOT an image. These are PARAMETERS for a distribution. But because these tutorials use MNIST, the output is already in the zero-one range and can be interpreted as an image. 
But with color images, this is not true."},{"code":null,"e":6021,"s":5790,"text":"To finalize the calculation of this formula, we use x_hat to parametrize a likelihood distribution (in this case a normal again) so that we can measure the probability of the input (image) under this high dimensional distribution."},{"code":null,"e":6116,"s":6021,"text":"ie: we are asking the same question: Given P_rec(x|z) and this image, what is the probability?"},{"code":null,"e":6259,"s":6116,"text":"Since the reconstruction term has a negative sign in front of it, we minimize it by maximizing the probability of this image under P_rec(x|z)."},{"code":null,"e":6565,"s":6259,"text":"Some things may not be obvious still from this explanation. First, each image will end up with its own q. The KL term will push all the qs towards the same p (called the prior). But if all the qs, collapse to p, then the network can cheat by just mapping everything to zero and thus the VAE will collapse."},{"code":null,"e":6737,"s":6565,"text":"The reconstruction term, forces each q to be unique and spread out so that the image can be reconstructed correctly. This keeps all the qs from collapsing onto each other."},{"code":null,"e":6868,"s":6737,"text":"As you can see, both terms provide a nice balance to each other. This is also why you may experience instability in training VAEs!"},{"code":null,"e":7073,"s":6868,"text":"Now that you understand the intuition behind the approach and math, let’s code up the VAE in PyTorch. For this implementation, I’ll use PyTorch Lightning which will keep the code short but still scalable."},{"code":null,"e":7176,"s":7073,"text":"If you skipped the earlier sections, recall that we are now going to implement the following VAE loss:"},{"code":null,"e":7297,"s":7176,"text":"This equation has 3 distributions. 
Our code will be agnostic to the distributions, but we’ll use Normal for all of them."},{"code":null,"e":7379,"s":7297,"text":"The first distribution: q(z|x) needs parameters which we generate via an encoder."},{"code":null,"e":7588,"s":7379,"text":"The second distribution: p(z) is the prior which we will fix to a specific location (0,1). By fixing this distribution, the KL divergence term will force q(z|x) to move closer to p by updating the parameters."},{"code":null,"e":7656,"s":7588,"text":"The optimization starts out with two distributions like this (q, p)."},{"code":null,"e":7748,"s":7656,"text":"and over time, moves q closer to p (p is fixed as you saw, and q has learnable parameters)."},{"code":null,"e":7914,"s":7748,"text":"The third distribution: p(x|z) (usually called the reconstruction), will be used to measure the probability of seeing the image (input) given the z that was sampled."},{"code":null,"e":8001,"s":7914,"text":"Think about this image as having 3072 dimensions (3 channels x 32 pixels x 32 pixels)."},{"code":null,"e":8067,"s":8001,"text":"So, we can now write a full class that implements this algorithm."},{"code":null,"e":8286,"s":8067,"text":"What’s nice about Lightning is that all the hard logic is encapsulated in the training_step. This means everyone can know exactly what something is doing when it is written in Lightning by looking at the training_step."},{"code":null,"e":8482,"s":8286,"text":"Data: The Lightning VAE is fully decoupled from the data! This means we can train on ImageNet, or whatever you want. For speed and cost purposes, I’ll use CIFAR-10 (a much smaller image dataset)."},{"code":null,"e":8733,"s":8482,"text":"Lightning uses regular PyTorch dataloaders. But it’s annoying to have to figure out transforms, and other settings to get the data in usable shape. 
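(One more toy sketch before the data plumbing: the "probability of the image" under that third distribution is just a sum of per-dimension log densities over those 3072 values. The decoder output and image below are random stand-ins, purely for illustration.)

```python
import math
import random

random.seed(0)
D = 3 * 32 * 32  # 3072 values in a flattened 32x32 color image

# Stand-in decoder output: x_hat holds one Gaussian mean per dimension
# (remember: parameters of a distribution, not an image itself)
x_hat = [random.uniform(0.4, 0.6) for _ in range(D)]
x = [random.uniform(0.0, 1.0) for _ in range(D)]  # stand-in input image
sigma = 1.0  # fixed likelihood scale, an arbitrary choice for this sketch

# log p(x|z): sum of independent per-dimension Gaussian log densities
log_px = sum(
    -0.5 * math.log(2 * math.pi * sigma ** 2) - (xi - mi) ** 2 / (2 * sigma ** 2)
    for xi, mi in zip(x, x_hat)
)
print(log_px)  # a large negative number; training pushes it upward
```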
For this, we’ll use the optional abstraction (Datamodule) which abstracts all this complexity from me."},{"code":null,"e":8871,"s":8733,"text":"Now that we have the VAE and the data, we can train it on as many GPUs as I want. In this case, colab gives us just 1, so we’ll use that."},{"code":null,"e":8903,"s":8871,"text":"And we’ll see training start..."},{"code":null,"e":8964,"s":8903,"text":"Even just after 18 epochs, I can look at the reconstruction."},{"code":null,"e":9115,"s":8964,"text":"Even though we didn’t train for long, and used no fancy tricks like perceptual losses, we get something that kind of looks like samples from CIFAR-10."},{"code":null,"e":9172,"s":9115,"text":"In the next post, I’ll cover the derivation of the ELBO!"}],"string":"[\n {\n \"code\": null,\n \"e\": 367,\n \"s\": 171,\n \"text\": \"It’s likely that you’ve searched for VAE tutorials but have come away empty-handed. Either the tutorial uses MNIST instead of color images or the concepts are conflated and not explained clearly.\"\n },\n {\n \"code\": null,\n \"e\": 383,\n \"s\": 367,\n \"text\": \"You’re in luck!\"\n },\n {\n \"code\": null,\n \"e\": 511,\n \"s\": 383,\n \"text\": \"This tutorial covers all aspects of VAEs including the matching math and implementation on a realistic dataset of color images.\"\n },\n {\n \"code\": null,\n \"e\": 538,\n \"s\": 511,\n \"text\": \"The outline is as follows:\"\n },\n {\n \"code\": null,\n \"e\": 711,\n \"s\": 538,\n \"text\": \"Resources (github code, colab).ELBO definition (optional).ELBO, KL divergence explanation (optional).ELBO, reconstruction loss explanation (optional).PyTorch implementation\"\n },\n {\n \"code\": null,\n \"e\": 743,\n \"s\": 711,\n \"text\": \"Resources (github code, colab).\"\n },\n {\n \"code\": null,\n \"e\": 771,\n \"s\": 743,\n \"text\": \"ELBO definition (optional).\"\n },\n {\n \"code\": null,\n \"e\": 815,\n \"s\": 771,\n \"text\": \"ELBO, KL divergence explanation (optional).\"\n },\n {\n \"code\": null,\n 
\"e\": 865,\n \"s\": 815,\n \"text\": \"ELBO, reconstruction loss explanation (optional).\"\n },\n {\n \"code\": null,\n \"e\": 888,\n \"s\": 865,\n \"text\": \"PyTorch implementation\"\n },\n {\n \"code\": null,\n \"e\": 918,\n \"s\": 888,\n \"text\": \"Follow along with this colab.\"\n },\n {\n \"code\": null,\n \"e\": 981,\n \"s\": 918,\n \"text\": \"Code is also available on Github here (don’t forget to star!).\"\n },\n {\n \"code\": null,\n \"e\": 1067,\n \"s\": 981,\n \"text\": \"For a production/research-ready implementation simply install pytorch-lightning-bolts\"\n },\n {\n \"code\": null,\n \"e\": 1103,\n \"s\": 1067,\n \"text\": \"pip install pytorch-lightning-bolts\"\n },\n {\n \"code\": null,\n \"e\": 1131,\n \"s\": 1103,\n \"text\": \"and import and use/subclass\"\n },\n {\n \"code\": null,\n \"e\": 1226,\n \"s\": 1131,\n \"text\": \"from pl_bolts.models.autoencoders import VAEmodel = VAE()trainer = Trainer()trainer.fit(model)\"\n },\n {\n \"code\": null,\n \"e\": 1335,\n \"s\": 1226,\n \"text\": \"In this section, we’ll discuss the VAE loss. If you don’t care for the math, feel free to skip this section!\"\n },\n {\n \"code\": null,\n \"e\": 1636,\n \"s\": 1335,\n \"text\": \"Distributions: First, let’s define a few things. Let p define a probability distribution. Let q define a probability distribution as well. These distributions could be any distribution you want like Normal, etc... In this tutorial, we don’t specify what these are to keep things easier to understand.\"\n },\n {\n \"code\": null,\n \"e\": 1747,\n \"s\": 1636,\n \"text\": \"So, when you see p, or q, just think of a blackbox that is a distribution. Don’t worry about what is in there.\"\n },\n {\n \"code\": null,\n \"e\": 1833,\n \"s\": 1747,\n \"text\": \"VAE loss: The loss function for the VAE is called the ELBO. The ELBO looks like this:\"\n },\n {\n \"code\": null,\n \"e\": 1914,\n \"s\": 1833,\n \"text\": \"The first term is the KL divergence. 
The second term is the reconstruction term.\"\n },\n {\n \"code\": null,\n \"e\": 2075,\n \"s\": 1914,\n \"text\": \"Confusion point 1 MSE: Most tutorials equate reconstruction with MSE. But this is misleading because MSE only works when you use certain distributions for p, q.\"\n },\n {\n \"code\": null,\n \"e\": 2239,\n \"s\": 2075,\n \"text\": \"Confusion point 2 KL divergence: Most other tutorials use p, q that are normal. If you assume p, q are Normal distributions, the KL term looks like this (in code):\"\n },\n {\n \"code\": null,\n \"e\": 2330,\n \"s\": 2239,\n \"text\": \"kl = torch.mean(-0.5 * torch.sum(1 + log_var - mu ** 2 - log_var.exp(), dim = 1), dim = 0)\"\n },\n {\n \"code\": null,\n \"e\": 2526,\n \"s\": 2330,\n \"text\": \"But in our equation, we DO NOT assume these are normal. We do this because it makes things much easier to understand and keeps the implementation general so you can use any distribution you want.\"\n },\n {\n \"code\": null,\n \"e\": 2604,\n \"s\": 2526,\n \"text\": \"Let’s break down each component of the loss to understand what each is doing.\"\n },\n {\n \"code\": null,\n \"e\": 2648,\n \"s\": 2604,\n \"text\": \"Let’s first look at the KL divergence term.\"\n },\n {\n \"code\": null,\n \"e\": 2819,\n \"s\": 2648,\n \"text\": \"The first part (min) says that we want to minimize this. Next to that, the E term stands for expectation under q. This means we draw a sample (z) from the q distribution.\"\n },\n {\n \"code\": null,\n \"e\": 2963,\n \"s\": 2819,\n \"text\": \"Notice that in this case, I used a Normal(0, 1) distribution for q. 
When we code the loss, we have to specify the distributions we want to use.\"\n },\n {\n \"code\": null,\n \"e\": 3148,\n \"s\": 2963,\n \"text\": \"Now that we have a sample, the next parts of the formula ask for two things: 1) the log probability of z under the q distribution, 2) the log probability of z under the p distribution.\"\n },\n {\n \"code\": null,\n \"e\": 3296,\n \"s\": 3148,\n \"text\": \"Notice that z has almost zero probability of having come from p. But has 6% probability of having come from q. If we visualize this it’s clear why:\"\n },\n {\n \"code\": null,\n \"e\": 3519,\n \"s\": 3296,\n \"text\": \"z has a value of 6.0110. If you look at the area of q where z is (ie: the probability), it’s clear that there is a non-zero chance it came from q. But, if you look at p, there’s basically a zero chance that it came from p.\"\n },\n {\n \"code\": null,\n \"e\": 3560,\n \"s\": 3519,\n \"text\": \"If we look back at this part of the loss\"\n },\n {\n \"code\": null,\n \"e\": 3639,\n \"s\": 3560,\n \"text\": \"You can see that we are minimizing the difference between these probabilities.\"\n },\n {\n \"code\": null,\n \"e\": 3804,\n \"s\": 3639,\n \"text\": \"So, to maximize the probability of z under p, we have to shift q closer to p, so that when we sample a new z from q, that value will have a much higher probability.\"\n },\n {\n \"code\": null,\n \"e\": 3831,\n \"s\": 3804,\n \"text\": \"Let’s verify this via code\"\n },\n {\n \"code\": null,\n \"e\": 3865,\n \"s\": 3831,\n \"text\": \"and now our new kl divergence is:\"\n },\n {\n \"code\": null,\n \"e\": 4263,\n \"s\": 3865,\n \"text\": \"Now, this z has a single dimension. But in the real world, we care about n-dimensional zs. To handle this in the implementation, we simply sum over the last dimension. 
The trick here is that when sampling from a univariate distribution (in this case Normal), if you sum across many of these distributions, it’s equivalent to using an n-dimensional distribution (n-dimensional Normal in this case).\"\n },\n {\n \"code\": null,\n \"e\": 4330,\n \"s\": 4263,\n \"text\": \"Here’s the kl divergence that is distribution agnostic in PyTorch.\"\n },\n {\n \"code\": null,\n \"e\": 4581,\n \"s\": 4330,\n \"text\": \"This generic form of the KL is called the monte-carlo approximation. This means we sample z many times and estimate the KL divergence. (in practice, these estimates are really good and with a batch size of 128 or more, the estimate is very accurate).\"\n },\n {\n \"code\": null,\n \"e\": 4639,\n \"s\": 4581,\n \"text\": \"The second term we’ll look at is the reconstruction term.\"\n },\n {\n \"code\": null,\n \"e\": 4807,\n \"s\": 4639,\n \"text\": \"In the KL explanation we used p(z), q(z|x). For this equation, we need to define a third distribution, P_rec(x|z). To avoid confusion we’ll use P_rec to differentiate.\"\n },\n {\n \"code\": null,\n \"e\": 4848,\n \"s\": 4807,\n \"text\": \"Tip: DO NOT confuse P_rec(x|z) and p(z).\"\n },\n {\n \"code\": null,\n \"e\": 5030,\n \"s\": 4848,\n \"text\": \"So, in this equation we again sample z from q. But now we use that z to calculate the probability of seeing the input x (ie: a color image in this case) given the z that we sampled.\"\n },\n {\n \"code\": null,\n \"e\": 5256,\n \"s\": 5030,\n \"text\": \"First we need to think of our images as having a distribution in image space. Imagine a very high dimensional distribution. For a color image that is 32x32 pixels, that means this distribution has (3x32x32 = 3072) dimensions.\"\n },\n {\n \"code\": null,\n \"e\": 5484,\n \"s\": 5256,\n \"text\": \"So, now we need a way to map the z vector (which is low dimensional) back into a super high dimensional distribution from which we can measure the probability of seeing this particular image. 
In VAEs, we use a decoder for that.\"\n },\n {\n \"code\": null,\n \"e\": 5790,\n \"s\": 5484,\n \"text\": \"Confusion point 3: Most tutorials show x_hat as an image. However, this is wrong. x_hat IS NOT an image. These are PARAMETERS for a distribution. But because these tutorials use MNIST, the output is already in the zero-one range and can be interpreted as an image. But with color images, this is not true.\"\n },\n {\n \"code\": null,\n \"e\": 6021,\n \"s\": 5790,\n \"text\": \"To finalize the calculation of this formula, we use x_hat to parametrize a likelihood distribution (in this case a normal again) so that we can measure the probability of the input (image) under this high dimensional distribution.\"\n },\n {\n \"code\": null,\n \"e\": 6116,\n \"s\": 6021,\n \"text\": \"ie: we are asking the same question: Given P_rec(x|z) and this image, what is the probability?\"\n },\n {\n \"code\": null,\n \"e\": 6259,\n \"s\": 6116,\n \"text\": \"Since the reconstruction term has a negative sign in front of it, we minimize it by maximizing the probability of this image under P_rec(x|z).\"\n },\n {\n \"code\": null,\n \"e\": 6565,\n \"s\": 6259,\n \"text\": \"Some things may not be obvious still from this explanation. First, each image will end up with its own q. The KL term will push all the qs towards the same p (called the prior). But if all the qs, collapse to p, then the network can cheat by just mapping everything to zero and thus the VAE will collapse.\"\n },\n {\n \"code\": null,\n \"e\": 6737,\n \"s\": 6565,\n \"text\": \"The reconstruction term, forces each q to be unique and spread out so that the image can be reconstructed correctly. This keeps all the qs from collapsing onto each other.\"\n },\n {\n \"code\": null,\n \"e\": 6868,\n \"s\": 6737,\n \"text\": \"As you can see, both terms provide a nice balance to each other. 
This is also why you may experience instability in training VAEs!\"\n },\n {\n \"code\": null,\n \"e\": 7073,\n \"s\": 6868,\n \"text\": \"Now that you understand the intuition behind the approach and math, let’s code up the VAE in PyTorch. For this implementation, I’ll use PyTorch Lightning which will keep the code short but still scalable.\"\n },\n {\n \"code\": null,\n \"e\": 7176,\n \"s\": 7073,\n \"text\": \"If you skipped the earlier sections, recall that we are now going to implement the following VAE loss:\"\n },\n {\n \"code\": null,\n \"e\": 7297,\n \"s\": 7176,\n \"text\": \"This equation has 3 distributions. Our code will be agnostic to the distributions, but we’ll use Normal for all of them.\"\n },\n {\n \"code\": null,\n \"e\": 7379,\n \"s\": 7297,\n \"text\": \"The first distribution: q(z|x) needs parameters which we generate via an encoder.\"\n },\n {\n \"code\": null,\n \"e\": 7588,\n \"s\": 7379,\n \"text\": \"The second distribution: p(z) is the prior which we will fix to a specific location (0,1). 
By fixing this distribution, the KL divergence term will force q(z|x) to move closer to p by updating the parameters.\"\n },\n {\n \"code\": null,\n \"e\": 7656,\n \"s\": 7588,\n \"text\": \"The optimization start out with two distributions like this (q, p).\"\n },\n {\n \"code\": null,\n \"e\": 7748,\n \"s\": 7656,\n \"text\": \"and over time, moves q closer to p (p is fixed as you saw, and q has learnable parameters).\"\n },\n {\n \"code\": null,\n \"e\": 7914,\n \"s\": 7748,\n \"text\": \"The third distribution: p(x|z) (usually called the reconstruction), will be used to measure the probability of seeing the image (input) given the z that was sampled.\"\n },\n {\n \"code\": null,\n \"e\": 8001,\n \"s\": 7914,\n \"text\": \"Think about this image as having 3072 dimensions (3 channels x 32 pixels x 32 pixels).\"\n },\n {\n \"code\": null,\n \"e\": 8067,\n \"s\": 8001,\n \"text\": \"So, we can now write a full class that implements this algorithm.\"\n },\n {\n \"code\": null,\n \"e\": 8286,\n \"s\": 8067,\n \"text\": \"What’s nice about Lightning is that all the hard logic is encapsulated in the training_step. This means everyone can know exactly what something is doing when it is written in Lightning by looking at the training_step.\"\n },\n {\n \"code\": null,\n \"e\": 8482,\n \"s\": 8286,\n \"text\": \"Data: The Lightning VAE is fully decoupled from the data! This means we can train on imagenet, or whatever you want. For speed and cost purposes, I’ll use cifar-10 (a much smaller image dataset).\"\n },\n {\n \"code\": null,\n \"e\": 8733,\n \"s\": 8482,\n \"text\": \"Lightning uses regular pytorch dataloaders. But it’s annoying to have to figure out transforms, and other settings to get the data in usable shape. 
For this, we’ll use the optional abstraction (Datamodule) which abstracts all this complexity from me.\"\n },\n {\n \"code\": null,\n \"e\": 8871,\n \"s\": 8733,\n \"text\": \"Now that we have the VAE and the data, we can train it on as many GPUs as I want. In this case, colab gives us just 1, so we’ll use that.\"\n },\n {\n \"code\": null,\n \"e\": 8903,\n \"s\": 8871,\n \"text\": \"And we’ll see training start...\"\n },\n {\n \"code\": null,\n \"e\": 8964,\n \"s\": 8903,\n \"text\": \"Even just after 18 epochs, I can look at the reconstruction.\"\n },\n {\n \"code\": null,\n \"e\": 9115,\n \"s\": 8964,\n \"text\": \"Even though we didn’t train for long, and used no fancy tricks like perceptual losses, we get something that kind of looks like samples from CIFAR-10.\"\n },\n {\n \"code\": null,\n \"e\": 9172,\n \"s\": 9115,\n \"text\": \"In the next post, I’ll cover the derivation of the ELBO!\"\n }\n]"}}},{"rowIdx":565,"cells":{"title":{"kind":"string","value":"Scrolling to element using Webdriver."},"text":{"kind":"string","value":"We can perform scrolling to an element using Selenium webdriver. This can be achieved in multiple ways. Selenium cannot handle scrolling directly. It takes the help of the Javascript Executor and Actions class to do scrolling action.\nFirst of all we have to identify the element up to which we have to scroll to with the help of any of the locators like class, id, name and so on. Next we shall take the help of the Javascript Executor to run the Javascript commands. The method executeScript is used to execute Javascript commands in Selenium. 
We have to use the scrollIntoView method in Javascript and pass true as an argument to the method.
WebElement e = driver.findElement(By.name("name"));
((JavascriptExecutor) driver).executeScript("arguments[0].scrollIntoView(true);", e);
Code Implementation with Javascript Executor.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.JavascriptExecutor;
public class ScrollToElementJs{
   public static void main(String[] args) throws InterruptedException {
      System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
      WebDriver driver = new ChromeDriver();
      driver.get("https://www.tutorialspoint.com/index.htm");
      driver.manage().timeouts().implicitlyWait(12, TimeUnit.SECONDS);
      // identify element
      WebElement m = driver.findElement(By.xpath("//*[text()='Careers']"));
      // Javascript executor
      ((JavascriptExecutor)driver).executeScript("arguments[0].scrollIntoView(true);", m);
      Thread.sleep(200);
      driver.close();
   }
}
With the Actions class, we shall use the moveToElement method and pass the WebElement locator as an argument to the method.
Code Implementation with Actions.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.interactions.Action;
import org.openqa.selenium.interactions.Actions;
public class ScrollToElementActions{
   public static void main(String[] args) {
      System.setProperty("webdriver.chrome.driver", "C:\\Users\\ghs6kor\\Desktop\\Java\\chromedriver.exe");
      WebDriver driver = new ChromeDriver();
      driver.get("https://www.tutorialspoint.com/index.htm");
      driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);
      // identify 
element\n WebElement m=driver.findElement(By.xpath(\"//*[text()='Careers']\"));\n // moveToElement method with Actions class\n Actions act = new Actions(driver);\n act.moveToElement(m);\n act.perform();\n driver.close();\n }\n}"},"parsed":{"kind":"list like","value":[{"code":null,"e":1296,"s":1062,"text":"We can perform scrolling to an element using Selenium webdriver. This can be achieved in multiple ways. Selenium cannot handle scrolling directly. It takes the help of the Javascript Executor and Actions class to do scrolling action."},{"code":null,"e":1706,"s":1296,"text":"First of all we have to identify the element up to which we have to scroll to with the help of any of the locators like class, id, name and so on. Next we shall take the help of the Javascript Executor to run the Javascript commands. The method executeScript is used to execute Javascript commands in Selenium. We have to use the scrollIntoView method in Javascript and pass true as an argument to the method."},{"code":null,"e":1844,"s":1706,"text":"WebElement e = driver.findElement(By.name(\"name\"));\n((JavascriptExecutor) driver).executeScript(\"arguments[0].scrollIntoView(true);\", e);"},{"code":null,"e":1890,"s":1844,"text":"Code Implementation with Javascript Executor."},{"code":null,"e":2768,"s":1890,"text":"import org.openqa.selenium.By;\nimport org.openqa.selenium.WebDriver;\nimport org.openqa.selenium.WebElement;\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport java.util.concurrent.TimeUnit;\nimport org.openqa.selenium.JavascriptExecutor;\npublic class ScrollToElementJs{\n public static void main(String[] args) {\n System.setProperty(\"webdriver.chrome.driver\", \"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n WebDriver driver = new ChromeDriver();\n driver.get(\"https://www.tutorialspoint.com/index.htm\");\n driver.manage().timeouts().implicitlyWait(12, TimeUnit.SECONDS);\n // identify element\n WebElement 
m=driver.findElement(By.xpath(\"//*[text()='Careers']\"));\n // Javascript executor\n ((JavascriptExecutor)driver).executeScript(\"arguments[0].scrollIntoView (true);\", m);\n Thread.sleep(200);\n driver.close();\n }\n}"},{"code":null,"e":2892,"s":2768,"text":"With the Actions class, we shall use the moveToElement method and pass the webelement locator as an argument to the method."},{"code":null,"e":2926,"s":2892,"text":"Code Implementation with Actions."},{"code":null,"e":3851,"s":2926,"text":"import org.openqa.selenium.By;\nimport org.openqa.selenium.WebDriver;\nimport org.openqa.selenium.WebElement;\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport java.util.concurrent.TimeUnit;\nimport org.openqa.selenium.interactions.Action;\nimport org.openqa.selenium.interactions.Actions;\npublic class ScrollToElementActions{\n public static void main(String[] args) {\n System.setProperty(\"webdriver.chrome.driver\", \"C:\\\\Users\\\\ghs6kor\\\\Desktop\\\\Java\\\\chromedriver.exe\");\n WebDriver driver = new ChromeDriver();\n driver.get(\"https://www.tutorialspoint.com/index.htm\");\n driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);\n // identify element\n WebElement m=driver.findElement(By.xpath(\"//*[text()='Careers']\"));\n // moveToElement method with Actions class\n Actions act = new Actions(driver);\n act.moveToElement(m);\n act.perform();\n driver.close();\n }\n}"}],"string":"[\n {\n \"code\": null,\n \"e\": 1296,\n \"s\": 1062,\n \"text\": \"We can perform scrolling to an element using Selenium webdriver. This can be achieved in multiple ways. Selenium cannot handle scrolling directly. It takes the help of the Javascript Executor and Actions class to do scrolling action.\"\n },\n {\n \"code\": null,\n \"e\": 1706,\n \"s\": 1296,\n \"text\": \"First of all we have to identify the element up to which we have to scroll to with the help of any of the locators like class, id, name and so on. 
Next we shall take the help of the Javascript Executor to run the Javascript commands. The method executeScript is used to execute Javascript commands in Selenium. We have to use the scrollIntoView method in Javascript and pass true as an argument to the method.\"\n },\n {\n \"code\": null,\n \"e\": 1844,\n \"s\": 1706,\n \"text\": \"WebElement e = driver.findElement(By.name(\\\"name\\\"));\\n((JavascriptExecutor) driver).executeScript(\\\"arguments[0].scrollIntoView(true);\\\", e);\"\n },\n {\n \"code\": null,\n \"e\": 1890,\n \"s\": 1844,\n \"text\": \"Code Implementation with Javascript Executor.\"\n },\n {\n \"code\": null,\n \"e\": 2768,\n \"s\": 1890,\n \"text\": \"import org.openqa.selenium.By;\\nimport org.openqa.selenium.WebDriver;\\nimport org.openqa.selenium.WebElement;\\nimport org.openqa.selenium.chrome.ChromeDriver;\\nimport java.util.concurrent.TimeUnit;\\nimport org.openqa.selenium.JavascriptExecutor;\\npublic class ScrollToElementJs{\\n public static void main(String[] args) {\\n System.setProperty(\\\"webdriver.chrome.driver\\\", \\\"C:\\\\\\\\Users\\\\\\\\ghs6kor\\\\\\\\Desktop\\\\\\\\Java\\\\\\\\chromedriver.exe\\\");\\n WebDriver driver = new ChromeDriver();\\n driver.get(\\\"https://www.tutorialspoint.com/index.htm\\\");\\n driver.manage().timeouts().implicitlyWait(12, TimeUnit.SECONDS);\\n // identify element\\n WebElement m=driver.findElement(By.xpath(\\\"//*[text()='Careers']\\\"));\\n // Javascript executor\\n ((JavascriptExecutor)driver).executeScript(\\\"arguments[0].scrollIntoView (true);\\\", m);\\n Thread.sleep(200);\\n driver.close();\\n }\\n}\"\n },\n {\n \"code\": null,\n \"e\": 2892,\n \"s\": 2768,\n \"text\": \"With the Actions class, we shall use the moveToElement method and pass the webelement locator as an argument to the method.\"\n },\n {\n \"code\": null,\n \"e\": 2926,\n \"s\": 2892,\n \"text\": \"Code Implementation with Actions.\"\n },\n {\n \"code\": null,\n \"e\": 3851,\n \"s\": 2926,\n \"text\": \"import 
org.openqa.selenium.By;\\nimport org.openqa.selenium.WebDriver;\\nimport org.openqa.selenium.WebElement;\\nimport org.openqa.selenium.chrome.ChromeDriver;\\nimport java.util.concurrent.TimeUnit;\\nimport org.openqa.selenium.interactions.Action;\\nimport org.openqa.selenium.interactions.Actions;\\npublic class ScrollToElementActions{\\n public static void main(String[] args) {\\n System.setProperty(\\\"webdriver.chrome.driver\\\", \\\"C:\\\\\\\\Users\\\\\\\\ghs6kor\\\\\\\\Desktop\\\\\\\\Java\\\\\\\\chromedriver.exe\\\");\\n WebDriver driver = new ChromeDriver();\\n driver.get(\\\"https://www.tutorialspoint.com/index.htm\\\");\\n driver.manage().timeouts().implicitlyWait(5, TimeUnit.SECONDS);\\n // identify element\\n WebElement m=driver.findElement(By.xpath(\\\"//*[text()='Careers']\\\"));\\n // moveToElement method with Actions class\\n Actions act = new Actions(driver);\\n act.moveToElement(m);\\n act.perform();\\n driver.close();\\n }\\n}\"\n }\n]"}}},{"rowIdx":566,"cells":{"title":{"kind":"string","value":"Detect an object with OpenCV-Python - GeeksforGeeks"},"text":{"kind":"string","value":"18 May, 2020\nOpenCV is the huge open-source library for computer vision, machine learning, and image processing and now it plays a major role in real-time operation which is very important in today’s systems. By using it, one can process images and videos to identify objects, faces, or even the handwriting of a human. This article focuses on detecting objects.\nNote: For more information, refer to Introduction to OpenCV.\nObject Detection is a computer technology related to computer vision, image processing, and deep learning that deals with detecting instances of objects in images and videos. We will do object detection in this article using something known as haar cascades.\nHaar Cascade classifiers are an effective way for object detection. 
This method was proposed by Paul Viola and Michael Jones in their paper Rapid Object Detection using a Boosted Cascade of Simple Features. Haar Cascade is a machine learning-based approach where a lot of positive and negative images are used to train the classifier.
Positive images – These images contain the images which we want our classifier to identify.
Negative Images – Images of everything else, which do not contain the object we want to detect.
Requirements.
Steps to download the requirements below:
Run the following command in the terminal to install OpenCV:
pip install opencv-python

Run the following command in the terminal to install matplotlib:
pip install matplotlib

To download the haar cascade file and image used in the below code as a zip file click here.
Note: Put the XML file and the PNG image in the same folder as your Python script.
Image used:
Opening an image
import cv2
from matplotlib import pyplot as plt

# Opening image
img = cv2.imread("image.jpg")

# OpenCV opens images as BGR
# but we want it as RGB and
# we also need a grayscale
# version
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Creates the environment
# of the picture and shows it
plt.subplot(1, 1, 1)
plt.imshow(img_rgb)
plt.show()
Output:
Recognition
We will use the detectMultiScale() function of OpenCV to recognize big signs as well as small ones:
# Use minSize because for not
# bothering with extra-small
# dots that would look like STOP signs
found = stop_data.detectMultiScale(img_gray, minSize=(20, 20))

# Don't do anything if there's
# no sign
amount_found = len(found)

if amount_found != 0:
    # There may be more than one
    # sign in the image
    for (x, y, width, height) in found:
        # We draw a green rectangle around
        # every recognized sign
        cv2.rectangle(img_rgb, (x, y), (x + width, y + height), (0, 255, 0), 5)
Here is the full script for lazy devs:
import cv2
from 
matplotlib import pyplot as plt # Opening imageimg = cv2.imread(\"image.jpg\") # OpenCV opens images as BRG # but we want it as RGB We'll # also need a grayscale versionimg_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Use minSize because for not # bothering with extra-small # dots that would look like STOP signsstop_data = cv2.CascadeClassifier('stop_data.xml') found = stop_data.detectMultiScale(img_gray, minSize =(20, 20)) # Don't do anything if there's # no signamount_found = len(found) if amount_found != 0: # There may be more than one # sign in the image for (x, y, width, height) in found: # We draw a green rectangle around # every recognized sign cv2.rectangle(img_rgb, (x, y), (x + height, y + width), (0, 255, 0), 5) # Creates the environment of # the picture and shows itplt.subplot(1, 1, 1)plt.imshow(img_rgb)plt.show()\nOutput :\npriyankamore\nOpenCV\npython\nPython\npython\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nPython Dictionary\nRead a file line by line in Python\nHow to Install PIP on Windows ?\nEnumerate() in Python\nDifferent ways to create Pandas Dataframe\nIterate over a list in Python\nPython String | replace()\nReading and Writing to text files in Python\n*args and **kwargs in Python\nCreate a Pandas DataFrame from Lists"},"parsed":{"kind":"list like","value":[{"code":null,"e":25949,"s":25921,"text":"\n18 May, 2020"},{"code":null,"e":26299,"s":25949,"text":"OpenCV is the huge open-source library for computer vision, machine learning, and image processing and now it plays a major role in real-time operation which is very important in today’s systems. By using it, one can process images and videos to identify objects, faces, or even the handwriting of a human. 
This article focuses on detecting objects.
Note: For more information, refer to Introduction to OpenCV.
Object detection is a computer technology related to computer vision, image processing, and deep learning that deals with detecting instances of objects in images and videos. In this article, we will perform object detection using Haar cascades.
Haar cascade classifiers are an effective method for object detection. The method was proposed by Paul Viola and Michael Jones in their paper Rapid Object Detection using a Boosted Cascade of Simple Features. A Haar cascade is a machine learning-based approach in which many positive and negative images are used to train the classifier.
Positive images – images that contain the object we want our classifier to identify.
Negative images – images of everything else, which do not contain the object we want to detect.
Requirements
Steps to download the requirements below:
Run the following command in the terminal to install OpenCV.
pip install opencv-python
Run the following command in the terminal to install Matplotlib.
pip install matplotlib
To download the Haar cascade file and image used in the code below as a zip file, click here.
Note: Put the XML file and the PNG image in the same folder as your Python script.
Image used:
Opening an image
import cv2
from matplotlib import pyplot as plt

# Opening image
img = cv2.imread("image.jpg")

# OpenCV opens images as BGR
# but we want it as RGB and
# we also need a grayscale
# version
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Creates the environment
# of the picture and shows it
plt.subplot(1, 1, 1)
plt.imshow(img_rgb)
plt.show()
Output:
Recognition
We will use the detectMultiScale() function of OpenCV to recognize big signs as well as small ones:
# Load the trained classifier
# from the XML file
stop_data = cv2.CascadeClassifier('stop_data.xml')

# Use minSize to ignore extra-small
# dots that might otherwise look
# like STOP signs
found = stop_data.detectMultiScale(img_gray, minSize=(20, 20))

# Don't do anything if there's
# no sign
amount_found = len(found)

if amount_found != 0:
    # There may be more than one
    # sign in the image
    for (x, y, width, height) in found:
        # We draw a green rectangle around
        # every recognized sign
        cv2.rectangle(img_rgb, (x, y),
                      (x + width, y + height),
                      (0, 255, 0), 5)
Here is the full script for lazy devs:
import cv2
from matplotlib import pyplot as plt

# Opening image
img = cv2.imread("image.jpg")

# OpenCV opens images as BGR
# but we want it as RGB; we'll
# also need a grayscale version
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Load the trained classifier
# from the XML file
stop_data = cv2.CascadeClassifier('stop_data.xml')

# Use minSize to ignore extra-small
# dots that might otherwise look
# like STOP signs
found = stop_data.detectMultiScale(img_gray, minSize=(20, 20))

# Don't do anything if there's
# no sign
amount_found = len(found)

if amount_found != 0:
    # There may be more than one
    # sign in the image
    for (x, y, width, height) in found:
        # We draw a green rectangle around
        # every recognized sign
        cv2.rectangle(img_rgb, (x, y),
                      (x + width, y + height),
                      (0, 255, 0), 5)

# Creates the environment of
# the picture and shows it
plt.subplot(1, 1, 1)
plt.imshow(img_rgb)
plt.show()
Output:
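One detail worth double-checking in code like the above: detectMultiScale() returns each box as (x, y, width, height), so the opposite corner passed to cv2.rectangle() must be (x + width, y + height); swapping width and height distorts every non-square box. A quick plain-Python sanity check of that corner arithmetic (no OpenCV needed, and the sample box values are made up for illustration):

```python
# A detection box is (x, y, width, height); the rectangle's two
# corners are the top-left point and (x + width, y + height).
def corners(box):
    x, y, w, h = box
    return (x, y), (x + w, y + h)

print(corners((10, 20, 50, 30)))  # ((10, 20), (60, 50))
```

Swapping width and height here would instead give (40, 70), a visibly wrong bottom-right corner for this wide, short box.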
Get a list of a specified column of a Pandas DataFrame - GeeksforGeeks
28 Jul, 2020
In this article, we will discuss how to get a list of a specified column of a Pandas DataFrame. First, we will read a CSV file into a pandas DataFrame.
Note: To get the CSV file used, click here.
Example:
Python3
# importing pandas module
import pandas as pd

# making data frame from csv
data = pd.read_csv("nba.csv")

# calling head() method
df = data.head(5)

# displaying data
df
Output:
Let’s see how to get a list of a specified column of a Pandas DataFrame. We will convert the column “Name” into a list using three different ways.
1. Using Series.tolist()
From the dataframe, we select the column “Name” using the [] operator, which returns a Series object. Next, we will use the Series.tolist() method provided by the Series class to convert the Series object and return a list.
Python3
# importing pandas module
import pandas as pd

# making data frame from csv
data = pd.read_csv("nba.csv")
df = data.head(5)

# Converting a specific Dataframe
# column to list using Series.tolist()
Name_list = df["Name"].tolist()

print("Converting name to list:")

# displaying list
Name_list
Output:
Let’s break it down and look at the types:
Python3
# column 'Name' as series object
print(type(df["Name"]))

# Convert series object to a list
print(type(df["Name"].tolist()))
Output:
2. Using numpy.ndarray.tolist()
From the dataframe, we select the column “Name” using the [] operator, which returns a Series object, and use Series.values to get a NumPy array from the Series object. Next, we will use the tolist() function provided by the NumPy array to convert it to a list.
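An aside not from the original article: Series.values still works, but current pandas documentation recommends Series.to_numpy() as the explicit way to get the underlying NumPy array. A self-contained sketch (the small inline frame stands in for the article's nba.csv):

```python
import pandas as pd

# Inline data so the snippet runs without any external file.
df = pd.DataFrame({"Name": ["Avery", "John", "Mia"]})

# .values works, but .to_numpy() is the explicit, recommended accessor.
arr = df["Name"].to_numpy()
print(arr.tolist())  # ['Avery', 'John', 'Mia']
```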
Python3
# importing pandas module
import pandas as pd

# making data frame from csv
data = pd.read_csv("nba.csv")
df = data.head(5)

# Converting a specific Dataframe column
# to list using numpy.ndarray.tolist()
Name_list = df["Name"].values.tolist()

print("Converting name to list:")

# displaying list
Name_list
Output:
Similarly, breaking it down:
Python3
# Select a column from dataframe
# as series and get a numpy array
print(type(df["Name"].values))

# Convert numpy array to a list
print(type(df["Name"].values.tolist()))
Output:
3. Using the Python list() function
You can also use the built-in Python list() function, which takes an optional iterable parameter, to convert a column into a list.
"},{"code":null,"e":24128,"s":24077,"text":"Note: To get the CSV file used click here.Example:"},{"code":null,"e":24136,"s":24128,"text":"Python3"},{"code":"# importing pandas module import pandas as pd # making data frame from csvdata = pd.read_csv(\"nba.csv\") # calling head() method df = data.head(5) # displaying data df","e":24316,"s":24136,"text":null},{"code":null,"e":24324,"s":24316,"text":"Output:"},{"code":null,"e":24717,"s":24324,"text":"Let’s see how to get a list of a specified column of a Pandas DataFrame:We will convert the column “Name” into a list using three different ways.1. Using Series.tolist()From the dataframe, we select the column “Name” using a [] operator that returns a Series object. Next, we will use the function Series.to_list() provided by the Series class to convert the series object and return a list. "},{"code":null,"e":24725,"s":24717,"text":"Python3"},{"code":"# importing pandas moduleimport pandas as pd # making data frame from csvdata = pd.read_csv(\"nba.csv\")df = data.head(5) # Converting a specific Dataframe # column to list using Series.tolist()Name_list = df[\"Name\"].tolist() print(\"Converting name to list:\") # displaying listName_list","e":25014,"s":24725,"text":null},{"code":null,"e":25022,"s":25014,"text":"Output:"},{"code":null,"e":25066,"s":25022,"text":"Let’s break it down and look at the types "},{"code":null,"e":25074,"s":25066,"text":"Python3"},{"code":"# column 'Name' as series objectprint(type(df[\"Name\"])) # Convert series object to a listprint(type(df[\"Name\"].values.tolist()","e":25202,"s":25074,"text":null},{"code":null,"e":25210,"s":25202,"text":"Output:"},{"code":null,"e":25495,"s":25210,"text":"2. Using numpy.ndarray.tolist()From the dataframe we select the column “Name” using a [] operator that returns a Series object and uses Series.Values to get a NumPy array from the series object. Next, we will use the function tolist() provided by NumPy array to convert it to a list. 
"},{"code":null,"e":25503,"s":25495,"text":"Python3"},{"code":"# importing pandas moduleimport pandas as pd # making data frame from csvdata = pd.read_csv(\"nba.csv\")df = data.head(5) # Converting a specific Dataframe column# to list using numpy.ndarray.tolist()Name_list = df[\"Name\"].values.tolist() print(\"Converting name to list:\") # displaying listName_list","e":25805,"s":25503,"text":null},{"code":null,"e":25813,"s":25805,"text":"Output:"},{"code":null,"e":25843,"s":25813,"text":"Similarly, breaking it down "},{"code":null,"e":25851,"s":25843,"text":"Python3"},{"code":"# Select a column from dataframe # as series and get a numpy arrayprint(type(df[\"Name\"].values)) # Convert numpy array to a listprint(type(df[\"Name\"].values.tolist()","e":26018,"s":25851,"text":null},{"code":null,"e":26026,"s":26018,"text":"Output:"},{"code":null,"e":26172,"s":26026,"text":"3. Using Python list() function You can also use the Python list() function with an optional iterable parameter to convert a column into a list. "},{"code":null,"e":26180,"s":26172,"text":"Python3"},{"code":"# importing pandas moduleimport pandas as pd # making data frame from csvdata = pd.read_csv(\"nba.csv\")df = data.head(5) # Converting a specific Dataframe# column to list using list()# function in PythonName_List = list(df[\"Name\"]) print(\"Converting name to list:\") # displaying listName_List","e":26476,"s":26180,"text":null},{"code":null,"e":26484,"s":26476,"text":"Output:"},{"code":null,"e":26714,"s":26484,"text":"Converting index column to list Index column can be converted to list, by calling pandas.DataFrame.index which returns the index column as an array and then calling index_column.tolist() which converts index_column into a list. 
"},{"code":null,"e":26722,"s":26714,"text":"Python3"},{"code":"# Converting index column to listindex_list = df.index.tolist() print(\"Converting index to list:\") # display index as listindex_list","e":26857,"s":26722,"text":null},{"code":null,"e":26865,"s":26857,"text":"Output:"},{"code":null,"e":26890,"s":26865,"text":"pandas-dataframe-program"},{"code":null,"e":26914,"s":26890,"text":"Python pandas-dataFrame"},{"code":null,"e":26937,"s":26914,"text":"Python Pandas-exercise"},{"code":null,"e":26951,"s":26937,"text":"Python-pandas"},{"code":null,"e":26958,"s":26951,"text":"Python"},{"code":null,"e":27056,"s":26958,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":27065,"s":27056,"text":"Comments"},{"code":null,"e":27078,"s":27065,"text":"Old Comments"},{"code":null,"e":27110,"s":27078,"text":"How to Install PIP on Windows ?"},{"code":null,"e":27166,"s":27110,"text":"How to drop one or multiple columns in Pandas Dataframe"},{"code":null,"e":27208,"s":27166,"text":"How To Convert Python Dictionary To JSON?"},{"code":null,"e":27250,"s":27208,"text":"Check if element exists in list in Python"},{"code":null,"e":27286,"s":27250,"text":"Python | Pandas dataframe.groupby()"},{"code":null,"e":27308,"s":27286,"text":"Defaultdict in Python"},{"code":null,"e":27347,"s":27308,"text":"Python | Get unique values from a list"},{"code":null,"e":27374,"s":27347,"text":"Python Classes and Objects"},{"code":null,"e":27405,"s":27374,"text":"Python | os.path.join() method"}],"string":"[\n {\n \"code\": null,\n \"e\": 23926,\n \"s\": 23898,\n \"text\": \"\\n28 Jul, 2020\"\n },\n {\n \"code\": null,\n \"e\": 24077,\n \"s\": 23926,\n \"text\": \"In this article, we will discuss how to get a list of specified column of a Pandas Dataframe. First, we will read a csv file into a pandas dataframe. 
\"\n },\n {\n \"code\": null,\n \"e\": 24128,\n \"s\": 24077,\n \"text\": \"Note: To get the CSV file used click here.Example:\"\n },\n {\n \"code\": null,\n \"e\": 24136,\n \"s\": 24128,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"# importing pandas module import pandas as pd # making data frame from csvdata = pd.read_csv(\\\"nba.csv\\\") # calling head() method df = data.head(5) # displaying data df\",\n \"e\": 24316,\n \"s\": 24136,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 24324,\n \"s\": 24316,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 24717,\n \"s\": 24324,\n \"text\": \"Let’s see how to get a list of a specified column of a Pandas DataFrame:We will convert the column “Name” into a list using three different ways.1. Using Series.tolist()From the dataframe, we select the column “Name” using a [] operator that returns a Series object. Next, we will use the function Series.to_list() provided by the Series class to convert the series object and return a list. 
\"\n },\n {\n \"code\": null,\n \"e\": 24725,\n \"s\": 24717,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"# importing pandas moduleimport pandas as pd # making data frame from csvdata = pd.read_csv(\\\"nba.csv\\\")df = data.head(5) # Converting a specific Dataframe # column to list using Series.tolist()Name_list = df[\\\"Name\\\"].tolist() print(\\\"Converting name to list:\\\") # displaying listName_list\",\n \"e\": 25014,\n \"s\": 24725,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 25022,\n \"s\": 25014,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 25066,\n \"s\": 25022,\n \"text\": \"Let’s break it down and look at the types \"\n },\n {\n \"code\": null,\n \"e\": 25074,\n \"s\": 25066,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"# column 'Name' as series objectprint(type(df[\\\"Name\\\"])) # Convert series object to a listprint(type(df[\\\"Name\\\"].values.tolist()\",\n \"e\": 25202,\n \"s\": 25074,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 25210,\n \"s\": 25202,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 25495,\n \"s\": 25210,\n \"text\": \"2. Using numpy.ndarray.tolist()From the dataframe we select the column “Name” using a [] operator that returns a Series object and uses Series.Values to get a NumPy array from the series object. Next, we will use the function tolist() provided by NumPy array to convert it to a list. 
\"\n },\n {\n \"code\": null,\n \"e\": 25503,\n \"s\": 25495,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"# importing pandas moduleimport pandas as pd # making data frame from csvdata = pd.read_csv(\\\"nba.csv\\\")df = data.head(5) # Converting a specific Dataframe column# to list using numpy.ndarray.tolist()Name_list = df[\\\"Name\\\"].values.tolist() print(\\\"Converting name to list:\\\") # displaying listName_list\",\n \"e\": 25805,\n \"s\": 25503,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 25813,\n \"s\": 25805,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 25843,\n \"s\": 25813,\n \"text\": \"Similarly, breaking it down \"\n },\n {\n \"code\": null,\n \"e\": 25851,\n \"s\": 25843,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"# Select a column from dataframe # as series and get a numpy arrayprint(type(df[\\\"Name\\\"].values)) # Convert numpy array to a listprint(type(df[\\\"Name\\\"].values.tolist()\",\n \"e\": 26018,\n \"s\": 25851,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 26026,\n \"s\": 26018,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 26172,\n \"s\": 26026,\n \"text\": \"3. Using Python list() function You can also use the Python list() function with an optional iterable parameter to convert a column into a list. 
\"\n },\n {\n \"code\": null,\n \"e\": 26180,\n \"s\": 26172,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"# importing pandas moduleimport pandas as pd # making data frame from csvdata = pd.read_csv(\\\"nba.csv\\\")df = data.head(5) # Converting a specific Dataframe# column to list using list()# function in PythonName_List = list(df[\\\"Name\\\"]) print(\\\"Converting name to list:\\\") # displaying listName_List\",\n \"e\": 26476,\n \"s\": 26180,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 26484,\n \"s\": 26476,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 26714,\n \"s\": 26484,\n \"text\": \"Converting index column to list Index column can be converted to list, by calling pandas.DataFrame.index which returns the index column as an array and then calling index_column.tolist() which converts index_column into a list. \"\n },\n {\n \"code\": null,\n \"e\": 26722,\n \"s\": 26714,\n \"text\": \"Python3\"\n },\n {\n \"code\": \"# Converting index column to listindex_list = df.index.tolist() print(\\\"Converting index to list:\\\") # display index as listindex_list\",\n \"e\": 26857,\n \"s\": 26722,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 26865,\n \"s\": 26857,\n \"text\": \"Output:\"\n },\n {\n \"code\": null,\n \"e\": 26890,\n \"s\": 26865,\n \"text\": \"pandas-dataframe-program\"\n },\n {\n \"code\": null,\n \"e\": 26914,\n \"s\": 26890,\n \"text\": \"Python pandas-dataFrame\"\n },\n {\n \"code\": null,\n \"e\": 26937,\n \"s\": 26914,\n \"text\": \"Python Pandas-exercise\"\n },\n {\n \"code\": null,\n \"e\": 26951,\n \"s\": 26937,\n \"text\": \"Python-pandas\"\n },\n {\n \"code\": null,\n \"e\": 26958,\n \"s\": 26951,\n \"text\": \"Python\"\n },\n {\n \"code\": null,\n \"e\": 27056,\n \"s\": 26958,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 27065,\n \"s\": 27056,\n \"text\": \"Comments\"\n },\n {\n \"code\": null,\n 
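To see the column-to-list conversions side by side in one self-contained snippet (the inline frame is a stand-in for the article's nba.csv; names are invented for illustration):

```python
import pandas as pd

# Small inline frame so the example runs without any external file.
df = pd.DataFrame({"Name": ["Avery", "John", "Mia"],
                   "Team": ["BOS", "LAL", "NYK"]})

a = df["Name"].tolist()          # 1. Series.tolist()
b = df["Name"].values.tolist()   # 2. numpy.ndarray.tolist()
c = list(df["Name"])             # 3. built-in list()
idx = df.index.tolist()          # index column as a list

print(a)    # ['Avery', 'John', 'Mia']
print(idx)  # [0, 1, 2]
```

All three approaches produce the same Python list; they differ only in which intermediate object (Series, NumPy array, or iterator) does the work.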
What is the difference between class and typeof function in R?
The class function in R tells us how an object is represented, while typeof tells us how it is stored internally. For example, typeof for a data frame is "list", because data frames are stored as lists in memory, but class for the same object is "data.frame", because that is how they are represented.
Check out the below examples with multiple types of objects to understand the differences.
x1<-rpois(20,2)
typeof(x1)
[1] "integer"
class(x1)
[1] "integer"
df1<-data.frame(x1)
df1
   x1
1   1
2   4
3   1
4   0
5   2
6   2
7   4
8   2
9   3
10  4
11  1
12  4
13  0
14  5
15  2
16  2
17  0
18  4
19  3
20  3
typeof(df1)
[1] "list"
class(df1)
[1] "data.frame"
M<-matrix(rnorm(40),ncol=2)
M
            [,1]       [,2]
[1,]  2.02437789 -0.9161853
[2,] -0.60108978 -0.8972007
[3,]  1.27916953  0.1017923
[4,]  1.06998017  1.4839931
[5,]  0.22298522  0.6160919
[6,] -0.29346341 -0.3975116
[7,]  2.07012097 -0.7900820
[8,]  0.36719470 -0.1100298
[9,] -0.69522122 -1.9198172
[10,]  2.07822428  0.2517532
[11,] -1.56267422  1.8295022
[12,] -1.07488221  1.2666054
[13,] -0.79381494 -1.0993693
[14,] -0.16027224 -1.1814177
[15,]  0.67561791  0.7309281
[16,] -1.40912018 -0.3307749
[17,] -0.77769513  0.5527600
[18,]  0.47050704  0.1075593
[19,] -0.46616151 -0.5079660
[20,] -0.01944371  0.1553333
typeof(M)
[1] "double"
class(M)
[1] "matrix" "array"
Workflow Tools for ML Pipelines. Chapter 5 excerpt of “Data Science in... | by Ben Weber | Towards Data Science

Airflow is becoming the industry standard for authoring data engineering and model pipeline workflows. This chapter of my book explores the process of taking a simple pipeline that runs on a single EC2 instance to a fully-managed Kubernetes ecosystem responsible for scheduling tasks. This post omits the sections on the fully-managed solutions with GKE and Cloud Composer.

leanpub.com

Model pipelines are usually part of a broader data platform that provides data sources, such as lakes and warehouses, and data stores, such as an application database. When building a pipeline, it’s useful to be able to schedule a task to run, ensure that any dependencies for the pipeline have already completed, and to backfill historic data if needed. While it’s possible to perform these types of tasks manually, there are a variety of tools that have been developed to improve the management of data science workflows.

In this chapter, we’ll explore a batch model pipeline that performs a sequence of tasks in order to train and store results for a propensity model. This is a different type of task than the deployments we’ve explored so far, which have focused on serving real-time model predictions as a web endpoint.
In a batch process, you perform a set of operations that store model results that are later served by a different application. For example, a batch model pipeline may predict which users in a game are likely to churn, and a game server fetches predictions for each user that starts a session and provides personalized offers.

When building batch model pipelines for production systems, it’s important to make sure that issues with the pipeline are quickly resolved. For example, if the model pipeline is unable to fetch the most recent data for a set of users due to an upstream failure with a database, it’s useful to have a system in place that can send alerts to the team that owns the pipeline and that can rerun portions of the model pipeline in order to resolve any issues with the prerequisite data or model outputs.

Workflow tools provide a solution for managing these types of problems in model pipelines. With a workflow tool, you specify the operations that need to be completed, identify dependencies between the operations, and then schedule the operations to be performed by the tool. A workflow tool is responsible for running tasks, provisioning resources, and monitoring the status of tasks. There are a number of open source tools for building workflows, including Airflow, Luigi, MLflow, and Pentaho Kettle. We’ll focus on Airflow, because it is being widely adopted across companies, and cloud platforms are also providing fully-managed versions of Airflow.

In this chapter, we’ll build a batch model pipeline that runs as a Docker container. Next, we’ll schedule the task to run on an EC2 instance using cron, and then explore a managed version of cron using Kubernetes.
In the third section, we’ll use Airflow to define a graph of operations to perform in order to run our model pipeline, and explore a cloud offering of Airflow.

A common workflow for batch model pipelines is to extract data from a data lake or data warehouse, train a model on historic user behavior, predict future user behavior for more recent data, and then save the results to a data warehouse or application database. In the gaming industry, this is a workflow I’ve seen used for building likelihood to purchase and likelihood to churn models, where the game servers use these predictions to provide different treatments to users based on the model predictions. Usually libraries like sklearn are used to develop models, and tools such as PySpark are used to scale up to the full player base.

It is typical for model pipelines to require other ETLs to run in a data platform before the pipeline can run on the most recent data. For example, there may be an upstream step in the data platform that translates JSON strings into schematized events that are used as input for a model. In this situation, it might be necessary to rerun the pipeline on a day that issues occurred with the JSON transformation process. For this section, we’ll avoid this complication by using a static input data source, but the tools that we’ll explore provide the functionality needed to handle these issues.

There are typically two types of batch model pipelines that I’ve seen deployed in the gaming industry:

Persistent: A training workflow separate from the serving workflow is used to train models. A model is persisted between training runs and loaded in the serving workflow.

Transient: The same workflow is used for training and serving predictions, and instead of saving the model as a file, the model is rebuilt for each run.

In this section we’ll build a transient batch pipeline, where a new model is retrained with each run.
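The difference between the two patterns comes down to whether a model artifact survives between runs. Below is a minimal sketch of both, not the chapter’s pipeline: train_model and score are invented placeholders, and pickle stands in for whatever serialization a real persistent workflow would use.

```python
import pickle
from pathlib import Path

# placeholder stand-ins for real training and scoring logic
def train_model(data):
    return {"weights": [1, 2]}          # pretend model artifact

def score(model, data):
    return [sum(model["weights"])] * len(data)

def persistent_run(data, model_path=Path("model.pkl")):
    # reuse the saved artifact if it exists; otherwise train and save it
    if model_path.exists():
        model = pickle.loads(model_path.read_bytes())
    else:
        model = train_model(data)
        model_path.write_bytes(pickle.dumps(model))
    return score(model, data)

def transient_run(data):
    # rebuild the model on every run; nothing is saved between runs
    model = train_model(data)
    return score(model, data)

print(transient_run([0, 0]))  # [3, 3]
```

With the persistent pattern, a scoring run can reuse the last saved artifact; with the transient pattern, every run pays the training cost.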
This approach generally results in more compute resources being used if the training process is heavyweight, but it helps avoid issues with model drift, which we’ll discuss in Chapter 11. We’ll author a pipeline that performs the following steps:

Fetches a dataset from GitHub

Trains a logistic regression model

Applies the regression model

Saves the results to BigQuery

The pipeline will execute as a single Python script that performs all of these steps. For situations where you want to use intermediate outputs from steps across multiple tasks, it’s useful to decompose the pipeline into multiple processes that are integrated through a workflow tool such as Airflow.

We’ll build this script by first writing a Python script that runs on an EC2 instance, and then Dockerizing the script so that we can use the container in workflows. To get started, we need to install a library for writing a Pandas data frame to BigQuery:

pip install --user pandas_gbq

Next, we’ll create a file called pipeline.py that performs the four pipeline steps identified above.
The script shown below performs these steps: it loads the necessary libraries, fetches the CSV file from GitHub into a Pandas data frame, splits the data frame into train and test groups to simulate historic and more recent users, builds a logistic regression model using the training data set, creates predictions for the test data set, and saves the resulting data frame to BigQuery.

import pandas as pd
import numpy as np
from google.oauth2 import service_account
from sklearn.linear_model import LogisticRegression
from datetime import datetime
import pandas_gbq

# fetch the data set and add IDs
gamesDF = pd.read_csv("https://github.com/bgweber/Twitch/raw/master/Recommendations/games-expand.csv")
gamesDF['User_ID'] = gamesDF.index
# flag roughly 10% of users as new
gamesDF['New_User'] = np.floor(np.random.randint(0, 10, gamesDF.shape[0])/9)

# train and test groups
train = gamesDF[gamesDF['New_User'] == 0]
x_train = train.iloc[:,0:10]
y_train = train['label']
test = gamesDF[gamesDF['New_User'] == 1]
x_test = test.iloc[:,0:10]

# build a model
model = LogisticRegression()
model.fit(x_train, y_train)
y_pred = model.predict_proba(x_test)[:, 1]

# build a predictions data frame
resultDF = pd.DataFrame({'User_ID':test['User_ID'], 'Pred':y_pred})
resultDF['time'] = str(datetime.now())

# save predictions to BigQuery
table_id = "dsp_demo.user_scores"
project_id = "gameanalytics-123"
credentials = service_account.Credentials.from_service_account_file('dsdemo.json')
pandas_gbq.to_gbq(resultDF, table_id, project_id=project_id,
    if_exists = 'replace', credentials=credentials)

To simulate a real-world data set, the script assigns a User_ID attribute to each record, which represents a unique ID to track different users in a system. The script also splits users into historic and recent groups by assigning a New_User attribute. After building predictions for each of the recent users, we create a results data frame with the user ID, the model prediction, and a timestamp.
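Since each run stamps its output with a time column, a downstream check can verify that the table was refreshed recently. Below is a minimal sketch of such a freshness check; the 26-hour threshold is an arbitrary assumption for a daily pipeline with some slack.

```python
from datetime import datetime, timedelta

def predictions_are_fresh(latest_run, now=None, max_age=timedelta(hours=26)):
    # latest_run: the max value of the time column in the output table
    # max_age: assumed threshold (a daily pipeline plus some slack)
    now = now or datetime.utcnow()
    return now - latest_run <= max_age

now = datetime(2019, 11, 1, 12, 0)
print(predictions_are_fresh(datetime(2019, 11, 1, 10, 0), now))   # True
print(predictions_are_fresh(datetime(2019, 10, 30, 10, 0), now))  # False
```

A check like this can run as a separate monitoring task and trigger an alert when the pipeline silently stops producing output.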
It’s useful to apply timestamps to predictions in order to determine if the pipeline has completed successfully. To test the model pipeline, run the following statements on the command line:

export GOOGLE_APPLICATION_CREDENTIALS=/home/ec2-user/dsdemo.json
python3 pipeline.py

If successful, the script should create a new data set on BigQuery called dsp_demo, create a new table called user_scores, and fill the table with user predictions. To test if data was actually populated in BigQuery, run the following commands in Jupyter:

from google.cloud import bigquery
client = bigquery.Client()
sql = "select * from dsp_demo.user_scores"
client.query(sql).to_dataframe().head()

This script will set up a client for connecting to BigQuery and then display the result set of the query submitted to BigQuery. You can also browse to the BigQuery web UI to inspect the results of the pipeline, as shown in Figure 5.1. We now have a script that can fetch data, apply a machine learning model, and save the results as a single process.

With many workflow tools, you can run Python code or bash scripts directly, but it’s good to set up isolated environments for executing scripts in order to avoid dependency conflicts for different libraries and runtimes. Luckily, we explored a tool for this in Chapter 4 and can use Docker with workflow tools. It’s useful to wrap Python scripts in Docker for workflow tools, because you can add libraries that may not be installed on the system responsible for scheduling, you can avoid issues with Python version conflicts, and containers are becoming a common way of defining tasks in workflow tools.

To containerize our workflow, we need to define a Dockerfile, as shown below. Since we are building out a new Python environment from scratch, we’ll need to install Pandas, sklearn, and the BigQuery library. We also need to copy credentials from the EC2 instance into the container so that we can set the credentials environment variable for authenticating with GCP.
This works for short-term deployments, but for longer running containers it’s better to set credentials in the instantiated container rather than copying static credentials into images. The Dockerfile lists out the Python libraries needed to run the script, copies in the local files needed for execution, sets the credentials environment variable, and specifies the script to run.

FROM ubuntu:latest
MAINTAINER Ben Weber

RUN apt-get update \
  && apt-get install -y python3-pip python3-dev \
  && cd /usr/local/bin \
  && ln -s /usr/bin/python3 python \
  && pip3 install pandas \
  && pip3 install sklearn \
  && pip3 install pandas_gbq

COPY pipeline.py pipeline.py
COPY dsdemo.json dsdemo.json
ENV GOOGLE_APPLICATION_CREDENTIALS=/dsdemo.json

ENTRYPOINT ["python3","pipeline.py"]

Note that COPY sources must live inside the Docker build context, so dsdemo.json should sit next to pipeline.py in the build directory, and ENV is used so that the credentials variable persists when the container runs.

Before deploying this script to production, we need to build an image from the script and test a sample run. The commands below show how to build an image from the Dockerfile, list the Docker images, and run an instance of the model pipeline image.

sudo docker image build -t "sklearn_pipeline" .
sudo docker images
sudo docker run sklearn_pipeline

After running the last command, the containerized pipeline should update the model predictions in BigQuery. We now have a model pipeline that we can run as a single bash command, which we now need to schedule to run at a specific frequency. For testing purposes, we’ll run the script every minute, but in practice models are typically executed hourly, daily, or weekly.

A common requirement for model pipelines is running a task at a regular frequency, such as every day or every hour. Cron is a utility that provides scheduling functionality for machines running the Linux operating system. You can set up a scheduled task using the crontab utility and assign a cron expression that defines how frequently to run the command.
Cron jobs run directly on the machine where cron is utilized, and can make use of the runtimes and libraries installed on the system.

There are a number of challenges with using cron in production-grade systems, but it’s a great way to get started with scheduling a small number of tasks, and it’s good to learn the cron expression syntax that is used in many scheduling systems. The main issue with the cron utility is that it runs on a single machine, and does not natively integrate with tools such as version control. If your machine goes down, then you’ll need to recreate your environment and update your cron table on a new machine.

A cron expression defines how frequently to run a command. It is a sequence of five fields that define when to execute at different time granularities: minute, hour, day of month, month, and day of week. It can include wildcards to always run for certain time periods. A few sample expressions are shown in the snippet below:

# run every minute
* * * * *

# Run at 10am UTC every day
0 10 * * *

# Run at 04:15 on Saturday
15 4 * * 6

When getting started with cron, it’s good to use tools to validate your expressions. Cron expressions are used in Airflow and many other scheduling systems.

We can use cron to schedule our model pipeline to run on a regular frequency. To schedule a command to run, run the following command on the console:

crontab -e

This command will open up the cron table file for editing in vi. To schedule the pipeline to run every minute, add the following lines to the file and save.

# run every minute
* * * * * sudo docker run sklearn_pipeline

After exiting the editor, the cron table will be updated with the new command to run. The second part of the cron statement is the command to run. When defining the command to run, it’s useful to include full file paths. With Docker, we just need to define the image to run.
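To build intuition for how the five fields are evaluated, here is a toy matcher in Python. It only handles "*" and plain numbers, unlike real cron, which also supports ranges, lists, and step values:

```python
from datetime import datetime

def cron_matches(expr, when):
    # toy matcher: supports only "*" and plain numbers in each field
    fields = expr.split()
    assert len(fields) == 5, "a cron expression has five fields"
    # field order: minute, hour, day of month, month, day of week (0 = Sunday)
    values = [when.minute, when.hour, when.day, when.month,
              (when.weekday() + 1) % 7]
    return all(f == "*" or int(f) == v for f, v in zip(fields, values))

print(cron_matches("0 10 * * *", datetime(2019, 11, 1, 10, 0)))  # True
print(cron_matches("15 4 * * 6", datetime(2019, 11, 2, 4, 15)))  # True (a Saturday)
```

A scheduler conceptually runs a check like this once a minute and fires the command whenever the current time matches.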
To check that the script is actually executing, browse to the BigQuery UI and check the time column on the user_scores model output table.

We now have a utility for scheduling our model pipeline on a regular schedule. However, if the machine goes down then our pipeline will fail to execute. To handle this situation, it’s good to explore cloud offerings with cron scheduling capabilities.

Cron is useful for simple pipelines, but runs into challenges when tasks have dependencies on other tasks that can fail. To help resolve this issue, where tasks have dependencies and only portions of a pipeline need to be rerun, we can leverage workflow tools. Apache Airflow is currently the most popular tool, but other open source projects are available and provide similar functionality, including Luigi and MLflow.

There are a few situations where workflow tools provide benefits over using cron directly:

Dependencies: Workflow tools define graphs of operations, which makes dependencies explicit.

Backfills: It may be necessary to run an ETL on old data, for a range of different dates.

Versioning: Most workflow tools integrate with version control systems to manage graphs.

Alerting: These tools can send out emails or generate PagerDuty alerts when failures occur.

Workflow tools are particularly useful in environments where different teams are scheduling tasks. For example, many game companies have data scientists that schedule model pipelines which are dependent on ETLs scheduled by a separate engineering team.

In this section, we’ll schedule our task to run on an EC2 instance using hosted Airflow, and then explore a fully-managed version of Airflow on GCP.

Airflow is an open source workflow tool that was originally developed by Airbnb and publicly released in 2015. It helps solve a challenge that many companies face, which is scheduling tasks that have many dependencies.
One of the core concepts in this tool is a graph that defines the tasks to perform and the relationships between these tasks.

In Airflow, a graph is referred to as a DAG, which is an acronym for directed acyclic graph. A DAG is a set of tasks to perform, where each task has zero or more upstream dependencies. One of the constraints is that cycles are not allowed, where two tasks have upstream dependencies on each other.

DAGs are set up using Python code, which is one of the differences from other workflow tools, such as Pentaho Kettle, which are GUI-focused. The Airflow approach is called “configuration as code”, because a Python script defines the operations to perform within a workflow graph. Using code instead of a GUI to configure workflows is useful because it makes it much easier to integrate with version control tools such as GitHub.

To get started with Airflow, we need to install the library, initialize the service, and run the scheduler. To perform these steps, run the following commands on an EC2 instance or your local machine:

export AIRFLOW_HOME=~/airflow
pip install --user apache-airflow
airflow initdb
airflow scheduler

Airflow also provides a web frontend for managing DAGs that have been scheduled. To start this service, run the following command in a new terminal on the same machine.

airflow webserver -p 8080

This command tells Airflow to start the web service on port 8080. You can open a web browser at this port on your machine to view the web frontend for Airflow, as shown in Figure 5.3.

Airflow comes preloaded with a number of example DAGs. For our model pipeline we’ll create a new DAG and then notify Airflow of the update.
We’ll create a file called sklearn.py with the following DAG definition:

from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from datetime import datetime, timedelta

default_args = {
    'owner': 'Airflow',
    'depends_on_past': False,
    'email': 'bgweber@gmail.com',
    'start_date': datetime(2019, 11, 1),
    'email_on_failure': True,
}

dag = DAG('games', default_args=default_args,
          schedule_interval="* * * * *")

t1 = BashOperator(
    task_id='sklearn_pipeline',
    bash_command='sudo docker run sklearn_pipeline',
    dag=dag)

There are a few steps in this Python script to call out. The script uses a Bash operator to define the action to perform. The Bash operator is defined as the last step in the script, and it specifies the command to perform. The DAG is instantiated with a number of input arguments that define the workflow settings, such as who to email when the task fails. A cron expression is passed to the DAG object to define the schedule for the task, and the DAG object is passed to the Bash operator to associate the task with this graph of operations.

Before adding the DAG to Airflow, it’s useful to check for syntax errors in your code. We can run the following command from the terminal to check for issues with the DAG:

python3 sklearn.py

This command will not run the DAG, but will flag any syntax errors present in the script. To update Airflow with the new DAG file, run the following command:

airflow list_dags

-------------------------------------------------------------------
DAGS
-------------------------------------------------------------------
games

This command will add the DAG to the list of workflows in Airflow. To view the list of DAGs, navigate to the Airflow web server, as shown in Figure 5.4. The web server will show the schedule of the DAG, and provide a history of past runs of the workflow.
To check that the DAG is actually working, browse to the BigQuery UI and check for fresh model outputs.

We now have an Airflow service up and running that we can use to monitor the execution of our workflows. This setup enables us to track the execution of workflows, backfill any gaps in data sets, and enable alerting for critical workflows.

Airflow supports a variety of operations, and many companies author custom operators for internal usage. In our first DAG, we used the Bash operator to define the task to execute, but other options are available for running Docker images, including the Docker operator. The code snippet below shows how to change our DAG to use the Docker operator instead of the Bash operator.

from airflow.operators.docker_operator import DockerOperator

t1 = DockerOperator(
    task_id='sklearn_pipeline',
    image='sklearn_pipeline',
    dag=dag)

The DAG we defined does not have any dependencies, since the container performs all of the steps in the model pipeline. If we had a dependency, such as running a sklearn_etl container before running the model pipeline, we can use the set_upstream command as shown below. This configuration sets up two tasks, where the pipeline task will execute after the etl task completes.

t1 = BashOperator(
    task_id='sklearn_etl',
    bash_command='sudo docker run sklearn_etl',
    dag=dag)

t2 = BashOperator(
    task_id='sklearn_pipeline',
    bash_command='sudo docker run sklearn_pipeline',
    dag=dag)

t2.set_upstream(t1)

Airflow provides a rich set of functionality and we’ve only touched the surface of what the tool provides. While we were already able to schedule the model pipeline with hosted and managed cloud offerings, it’s useful to schedule the task through Airflow for improved monitoring and versioning.
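Under the hood, a scheduler derives a run order from upstream constraints like the ones above. The sketch below shows the idea with a plain topological sort (Kahn's algorithm); it is not Airflow's implementation, and the task names mirror the earlier example only for illustration.

```python
from collections import defaultdict, deque

def run_order(edges, tasks):
    # edges: (upstream, downstream) pairs, like t2.set_upstream(t1)
    # returns an order where every upstream task runs first;
    # raises ValueError if the graph contains a cycle (not a valid DAG)
    indegree = {t: 0 for t in tasks}
    downstream = defaultdict(list)
    for up, down in edges:
        downstream[up].append(down)
        indegree[down] += 1
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in downstream[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        raise ValueError("cycle detected: not a DAG")
    return order

print(run_order([("sklearn_etl", "sklearn_pipeline")],
                ["sklearn_pipeline", "sklearn_etl"]))
# ['sklearn_etl', 'sklearn_pipeline']
```

The ValueError branch is the same acyclicity constraint that Airflow enforces when it refuses to load a DAG with circular dependencies.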
The landscape of workflow tools will change over time, but many of the concepts of Airflow will translate to these new tools.\nIn this chapter we explored a batch model pipeline for applying a machine learning model to a set of users and storing the results to BigQuery. To make the pipeline portable, so that we can execute it in different environments, we created a Docker image to define the required libraries and credentials for the pipeline. We then ran the pipeline on an EC2 instance using batch commands, cron, and Airflow. We also used GKE and Cloud Composer to run the container via Kubernetes.\nWorkflow tools can be tedious to set up, especially when installing a cluster deployment, but they provide a number of benefits over manual approaches. One of the key benefits is the ability to handle DAG configuration as code, which enables code reviews and version control for workflows. It’s useful to get experience with configuration as code, because it is an introduction to another concept called “infra as code” that we’ll explore in Chapter 10.\nBen Weber is a distinguished data scientist at Zynga. We are hiring!"},"parsed":{"kind":"list like","value":[{"code":null,"e":547,"s":172,"text":"Airflow is becoming the industry standard for authoring data engineering and model pipeline workflows. This chapter of my book explores the process of taking a simple pipeline that runs on a single EC2 instance to a fully-managed Kubernetes ecosystem responsible for scheduling tasks. This posts omits the sections on the fully-managed solutions with GKE and Cloud Composer."},{"code":null,"e":559,"s":547,"text":"leanpub.com"},{"code":null,"e":1083,"s":559,"text":"Model pipelines are usually part of a broader data platform that provides data sources, such as lakes and warehouses, and data stores, such as an application database. 
When building a pipeline, it’s useful to be able to schedule a task to run, ensure that any dependencies for the pipeline have already completed, and to backfill historic data if needed. While it’s possible to perform these types of tasks manually, there are a variety of tools that have been developed to improve the management of data science workflows."},{"code":null,"e":1711,"s":1083,"text":"In this chapter, we’ll explore a batch model pipeline that performs a sequence of tasks in order to train and store results for a propensity model. This is a different type of task than the deployments we’ve explored so far, which have focused on serving real-time model predictions as a web endpoint. In a batch process, you perform a set of operations that store model results that are later served by a different application. For example, a batch model pipeline may predict which users in a game are likely to churn, and a game server fetches predictions for each user that starts a session and provides personalized offers."},{"code":null,"e":2209,"s":1711,"text":"When building batch model pipelines for production systems, it’s important to make sure that issues with the pipeline are quickly resolved. For example, if the model pipeline is unable to fetch the most recent data for a set of users due to an upstream failure with a database, it’s useful to have a system in place that can send alerts to the team that owns the pipeline and that can rerun portions of the model pipeline in order to resolve any issues with the prerequisite data or model outputs."},{"code":null,"e":2863,"s":2209,"text":"Workflow tools provide a solution for managing these types of problems in model pipelines. With a workflow tool, you specify the operations that need to be completed, identify dependencies between the operations, and then schedule the operations to be performed by the tool. A workflow tool is responsible for running tasks, provisioning resources, and monitoring the status of tasks. 
There’s a number of open source tools for building workflows including AirFlow, Luigi, MLflow, and Pentaho Kettle. We’ll focus on Airflow, because it is being widely adopted across companies and cloud platforms and are also providing fully-managed versions of Airflow."},{"code":null,"e":3237,"s":2863,"text":"In this chapter, we’ll build a batch model pipeline that runs as a Docker container. Next, we’ll schedule the task to run on an EC2 instance using cron, and then explore a managed version of cron using Kubernetes. In the third section, we’ll use Airflow to define a graph of operations to perform in order to run our model pipeline, and explore a cloud offering of Airflow."},{"code":null,"e":3878,"s":3237,"text":"A common workflow for batch model pipelines is to extract data from a data lake or data warehouse, train a model on historic user behavior, predict future user behavior for more recent data, and then save the results to a data warehouse or application database. In the gaming industry, this is a workflow I’ve seen used for building likelihood to purchase and likelihood to churn models, where the game servers use these predictions to provide different treatments to users based on the model predictions. Usually libraries like sklearn are used to develop models, and languages such as PySpark are used to scale up to the full player base."},{"code":null,"e":4472,"s":3878,"text":"It is typical for model pipelines to require other ETLs to run in a data platform before the pipeline can run on the most recent data. For example, there may be an upstream step in the data platform that translates json strings into schematized events that are used as input for a model. In this situation, it might be necessary to rerun the pipeline on a day that issues occurred with the json transformation process. 
For this section, we’ll avoid this complication by using a static input data source, but the tools that we’ll explore provide the functionality needed to handle these issues."},{"code":null,"e":4577,"s":4472,"text":"There’s typically two types of batch model pipelines that can I’ve seen deployed in the gaming industry:"},{"code":null,"e":4761,"s":4577,"text":"Persistent: A separate training workflow is used to train models from the one used to build predictions. A model is persisted between training runs and loaded in the serving workflow."},{"code":null,"e":4914,"s":4761,"text":"Transient: The same workflow is used for training and serving predictions, and instead of saving the model as a file, the model is rebuilt for each run."},{"code":null,"e":5263,"s":4914,"text":"In this section we’ll build a transient batch pipeline, where a new model is retrained with each run. This approach generally results in more compute resources being used if the training process is heavyweight, but it helps avoid issues with model drift, which we’ll discuss in Chapter 11. We’ll author a pipeline that performs the following steps:"},{"code":null,"e":5384,"s":5263,"text":"Fetches a dataset from GitHubTrains a logistic regression modelApplies the regression modelSaves the results to BigQuery"},{"code":null,"e":5414,"s":5384,"text":"Fetches a dataset from GitHub"},{"code":null,"e":5449,"s":5414,"text":"Trains a logistic regression model"},{"code":null,"e":5478,"s":5449,"text":"Applies the regression model"},{"code":null,"e":5508,"s":5478,"text":"Saves the results to BigQuery"},{"code":null,"e":5809,"s":5508,"text":"The pipeline will execute as a single Python script that performs all of these steps. 
For situations where you want to use intermediate outputs from steps across multiple tasks, it’s useful to decompose the pipeline into multiple processes that are integrated through a workflow tool such as Airflow."},{"code":null,"e":6063,"s":5809,"text":"We’ll build this script by first writing a Python script that runs on an EC2 instance, and then Dockerize the script so that we can use the container in workflows. To get started, we need to install a library for writing a Pandas data frame to BigQuery:"},{"code":null,"e":6093,"s":6063,"text":"pip install --user pandas_gbq"},{"code":null,"e":6583,"s":6093,"text":"Next, we’ll create a file called pipeline.py that performs the four pipeline steps identified above.. The script shown below performs these steps by loading the necessary libraries, fetching the CSV file from GitHub into a Pandas data frame, splits the data frame into train and test groups to simulate historic and more recent users, builds a logistic regression model using the training data set, creates predictions for the test data set, and saves the resulting data frame to BigQuery."},{"code":null,"e":7836,"s":6583,"text":"import pandas as pdimport numpy as npfrom google.oauth2 import service_accountfrom sklearn.linear_model import LogisticRegressionfrom datetime import datetimeimport pandas_gbq # fetch the data set and add IDs gamesDF = pd.read_csv(\"https://github.com/bgweber/Twitch/raw/ master/Recommendations/games-expand.csv\")gamesDF['User_ID'] = gamesDF.index gamesDF['New_User'] = np.floor(np.random.randint(0, 10, gamesDF.shape[0])/9)# train and test groups train = gamesDF[gamesDF['New_User'] == 0]x_train = train.iloc[:,0:10]y_train = train['label']test = gameDF[gamesDF['New_User'] == 1]x_test = test.iloc[:,0:10]# build a modelmodel = LogisticRegression()model.fit(x_train, y_train)y_pred = model.predict_proba(x_test)[:, 1]# build a predictions data frameresultDF = pd.DataFrame({'User_ID':test['User_ID'], 'Pred':y_pred}) resultDF['time'] = 
str(datetime. now())# save predictions to BigQuery table_id = \"dsp_demo.user_scores\"project_id = \"gameanalytics-123\"credentials = service_account.Credentials. from_service_account_file('dsdemo.json')pandas_gbq.to_gbq(resultDF, table_id, project_id=project_id, if_exists = 'replace', credentials=credentials)"},{"code":null,"e":8425,"s":7836,"text":"To simulate a real-world data set, the script assigns a User_ID attribute to each record, which represents a unique ID to track different users in a system. The script also splits users into historic and recent groups by assigning a New_User attribute. After building predictions for each of the recent users, we create a results data frame with the user ID, the model predictIon, and a timestamp. It’s useful to apply timestamps to predictions in order to determine if the pipeline has completed successfully. To test the model pipeline, run the following statements on the command line:"},{"code":null,"e":8518,"s":8425,"text":"export GOOGLE_APPLICATION_CREDENTIALS= /home/ec2-user/dsdemo.jsonpython3 pipeline.py"},{"code":null,"e":8775,"s":8518,"text":"If successfully, the script should create a new data set on BigQuery called dsp_demo, create a new table called user_users, and fill the table with user predictions. To test if data was actually populated in BigQuery, run the following commands in Jupyter:"},{"code":null,"e":8916,"s":8775,"text":"from google.cloud import bigqueryclient = bigquery.Client()sql = \"select * from dsp_demo.user_scores\"client.query(sql).to_dataframe().head()"},{"code":null,"e":9267,"s":8916,"text":"This script will set up a client for connecting to BigQuery and then display the result set of the query submitted to BigQuery. You can also browse to the BigQuery web UI to inspect the results of the pipeline, as shown in Figure 5.1. 
We now have a script that can fetch data, apply a machine learning model, and save the results as a single process."},{"code":null,"e":9871,"s":9267,"text":"With many workflow tools, you can run Python code or bash scripts directly, but it’s good to set up isolated environments for executing scripts in order to avoid dependency conflicts for different libraries and runtimes. Luckily, we explored a tool for this in Chapter 4 and can use Docker with workflow tools. It’s useful to wrap Python scripts in Docker for workflow tools, because you can add libraries that may not be installed on the system responsible for scheduling, you can avoid issues with Python version conflicts, and containers are becoming a common way of defining tasks in workflow tools."},{"code":null,"e":10579,"s":9871,"text":"To containerize our workflow, we need to define a Dockerfile, as shown below. Since we are building out a new Python environment from scratch, we’ll need to install Pandas, sklearn, and the BigQuery library. We also need to copy credentials from the EC2 instance into the container so that we can run the export command for authenticating with GCP. This works for short term deployments, but for longer running containers it’s better to run the export in the instantiated container rather than copying static credentials into images. 
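One way to avoid baking the key into the image is to mount it when launching the container and set the credentials variable at run time. A sketch of this pattern is shown below, assuming the key location from the EC2 setup above; the mount target matches the dsdemo.json path that pipeline.py expects:

```shell
# mount the service account key read-only at run time and point the
# credentials environment variable at the mounted path
sudo docker run \
    -v /home/ec2-user/dsdemo.json:/dsdemo.json:ro \
    -e GOOGLE_APPLICATION_CREDENTIALS=/dsdemo.json \
    sklearn_pipeline
```

With this approach the image contains no secrets, and the same image can be promoted across environments with different keys.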
The Dockerfile lists out the Python libraries needed to run the script, copies in the local files needed for execution, sets the credentials environment variable, and specifies the script to run."},{"code":null,"e":11006,"s":10579,"text":"FROM ubuntu:latest
MAINTAINER Ben Weber

RUN apt-get update \\
  && apt-get install -y python3-pip python3-dev \\
  && cd /usr/local/bin \\
  && ln -s /usr/bin/python3 python \\
  && pip3 install pandas \\
  && pip3 install scikit-learn \\
  && pip3 install pandas_gbq

# the key file must be copied into the build context directory first
COPY pipeline.py pipeline.py
COPY dsdemo.json dsdemo.json

# use ENV rather than RUN export, since exported variables
# do not persist across Dockerfile build steps
ENV GOOGLE_APPLICATION_CREDENTIALS=/dsdemo.json

ENTRYPOINT [\"python3\",\"pipeline.py\"]"},{"code":null,"e":11255,"s":11006,"text":"Before deploying this pipeline to production, we need to build an image from the Dockerfile and test a sample run. The commands below show how to build an image from the Dockerfile, list the Docker images, and run an instance of the model pipeline image."},{"code":null,"e":11353,"s":11255,"text":"sudo docker image build -t \"sklearn_pipeline\" .
sudo docker images
sudo docker run sklearn_pipeline"},{"code":null,"e":11723,"s":11353,"text":"After running the last command, the containerized pipeline should update the model predictions in BigQuery. We now have a model pipeline that we can run as a single bash command, which we next need to schedule to run at a specific frequency. For testing purposes, we’ll run the script every minute, but in practice models are typically executed hourly, daily, or weekly."},{"code":null,"e":12214,"s":11723,"text":"A common requirement for model pipelines is running a task at a regular frequency, such as every day or every hour. Cron is a utility that provides scheduling functionality for machines running the Linux operating system. You can set up a scheduled task using the crontab utility and assign a cron expression that defines how frequently to run the command. 
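Cron expressions are easy to mistype, so it can help to sanity-check an expression before adding it to the crontab. Below is a rough sketch of a structural check; the helper is hypothetical and only validates field count and plain numeric ranges, not full cron syntax such as steps, lists, or ranges:

```python
def looks_like_cron(expr):
    """Rough structural check for a 5-field cron expression."""
    # allowed ranges: minute, hour, day of month, month, day of week
    ranges = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 6)]
    fields = expr.split()
    if len(fields) != len(ranges):
        return False
    for field, (lo, hi) in zip(fields, ranges):
        if field == "*":
            continue
        # only bare numbers are handled in this sketch
        if not field.isdigit() or not lo <= int(field) <= hi:
            return False
    return True

print(looks_like_cron("0 10 * * *"))  # True: valid five-field expression
print(looks_like_cron("0 10 * *"))    # False: only four fields
```

For real expressions, a full cron parser or an online validator is a safer choice than a hand-rolled check like this.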
Cron jobs run directly on the machine where cron is installed, and can make use of the runtimes and libraries installed on the system."},{"code":null,"e":12719,"s":12214,"text":"There are a number of challenges with using cron in production-grade systems, but it’s a great way to get started with scheduling a small number of tasks, and it’s good to learn the cron expression syntax that is used in many scheduling systems. The main issue with the cron utility is that it runs on a single machine, and does not natively integrate with tools such as version control. If your machine goes down, then you’ll need to recreate your environment and update your cron table on a new machine."},{"code":null,"e":12995,"s":12719,"text":"A cron expression defines how frequently to run a command. It is a sequence of five fields that define when to execute at different time granularities, and it can include wildcards to always run for certain time periods. A few sample expressions are shown in the snippet below:"},{"code":null,"e":13100,"s":12995,"text":"# run every minute
* * * * *

# run at 10am UTC every day
0 10 * * *

# run at 04:15 on Saturday
15 4 * * 6"},{"code":null,"e":13257,"s":13100,"text":"When getting started with cron, it’s good to use tools to validate your expressions. Cron expressions are used in Airflow and many other scheduling systems."},{"code":null,"e":13407,"s":13257,"text":"We can use cron to schedule our model pipeline to run at a regular frequency. To schedule a command to run, run the following command on the console:"},{"code":null,"e":13418,"s":13407,"text":"crontab -e"},{"code":null,"e":13578,"s":13418,"text":"This command will open up the cron table file for editing in vi. 
To schedule the pipeline to run every minute, add the following lines to the file and save."},{"code":null,"e":13640,"s":13578,"text":"# run every minute
* * * * * sudo docker run sklearn_pipeline"},{"code":null,"e":14054,"s":13640,"text":"After exiting the editor, the cron table will be updated with the new entry. The second part of the cron statement is the command to run. When defining the command, it’s useful to include full file paths. With Docker, we just need to define the image to run. To check that the script is actually executing, browse to the BigQuery UI and check the time column on the user_scores model output table."},{"code":null,"e":14305,"s":14054,"text":"We now have a utility for scheduling our model pipeline on a regular schedule. However, if the machine goes down, then our pipeline will fail to execute. To handle this situation, it’s good to explore cloud offerings with cron scheduling capabilities."},{"code":null,"e":14725,"s":14305,"text":"Cron is useful for simple pipelines, but runs into challenges when tasks have dependencies on other tasks that can fail. To help resolve this issue, where tasks have dependencies and only portions of a pipeline need to be rerun, we can leverage workflow tools. 
Apache Airflow is currently the most popular tool, but other open source projects provide similar functionality, including Luigi and MLflow."},{"code":null,"e":14816,"s":14725,"text":"There are a few situations where workflow tools provide benefits over using cron directly:"},{"code":null,"e":14909,"s":14816,"text":"Dependencies: Workflow tools define graphs of operations, which makes dependencies explicit."},{"code":null,"e":14999,"s":14909,"text":"Backfills: It may be necessary to run an ETL on old data, for a range of different dates."},{"code":null,"e":15088,"s":14999,"text":"Versioning: Most workflow tools integrate with version control systems to manage graphs."},{"code":null,"e":15179,"s":15088,"text":"Alerting: These tools can send out emails or generate PagerDuty alerts when failures occur."},{"code":null,"e":15432,"s":15179,"text":"Workflow tools are particularly useful in environments where different teams are scheduling tasks. For example, many game companies have data scientists that schedule model pipelines which are dependent on ETLs scheduled by a separate engineering team."},{"code":null,"e":15578,"s":15432,"text":"In this section, we’ll schedule our task to run on an EC2 instance using self-hosted Airflow, and then explore a fully-managed version of Airflow on GCP."},{"code":null,"e":15925,"s":15578,"text":"Airflow is an open source workflow tool that was originally developed by Airbnb and publicly released in 2015. It helps solve a challenge that many companies face, which is scheduling tasks that have many dependencies. One of the core concepts in this tool is a graph that defines the tasks to perform and the relationships between these tasks."},{"code":null,"e":16223,"s":15925,"text":"In Airflow, a graph is referred to as a DAG, which is an acronym for directed acyclic graph. A DAG is a set of tasks to perform, where each task has zero or more upstream dependencies. 
One of the constraints is that cycles are not allowed, where two tasks have upstream dependencies on each other."},{"code":null,"e":16649,"s":16223,"text":"DAGs are set up using Python code, which is one of the differences from other workflow tools such as Pentaho Kettle, which is GUI-focused. The Airflow approach is called “configuration as code”, because a Python script defines the operations to perform within a workflow graph. Using code instead of a GUI to configure workflows is useful because it makes it much easier to integrate with version control tools such as GitHub."},{"code":null,"e":16850,"s":16649,"text":"To get started with Airflow, we need to install the library, initialize the service, and run the scheduler. To perform these steps, run the following commands on an EC2 instance or your local machine:"},{"code":null,"e":16944,"s":16850,"text":"export AIRFLOW_HOME=~/airflow
pip install --user apache-airflow
airflow initdb
airflow scheduler"},{"code":null,"e":17113,"s":16944,"text":"Airflow also provides a web frontend for managing DAGs that have been scheduled. To start this service, run the following command in a new terminal on the same machine."},{"code":null,"e":17139,"s":17113,"text":"airflow webserver -p 8080"},{"code":null,"e":17323,"s":17139,"text":"This command tells Airflow to start the web service on port 8080. You can open a web browser at this port on your machine to view the web frontend for Airflow, as shown in Figure 5.3."},{"code":null,"e":17536,"s":17323,"text":"Airflow comes preloaded with a number of example DAGs. For our model pipeline we’ll create a new DAG and then notify Airflow of the update. 
We’ll create a file called sklearn.py with the following DAG definition:"},{"code":null,"e":18018,"s":17536,"text":"from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from datetime import datetime, timedelta

default_args = {
    'owner': 'Airflow',
    'depends_on_past': False,
    'email': 'bgweber@gmail.com',
    'start_date': datetime(2019, 11, 1),
    'email_on_failure': True,
}

dag = DAG('games', default_args=default_args,
          schedule_interval=\"* * * * *\")

t1 = BashOperator(
    task_id='sklearn_pipeline',
    bash_command='sudo docker run sklearn_pipeline',
    dag=dag)"},{"code":null,"e":18559,"s":18018,"text":"There are a few steps in this Python script to call out. The script uses a Bash operator, defined as the last step in the script, to specify the command to perform. The DAG is instantiated with a number of input arguments that define the workflow settings, such as who to email when the task fails. A cron expression is passed to the DAG object to define the schedule for the task, and the DAG object is passed to the Bash operator to associate the task with this graph of operations."},{"code":null,"e":18731,"s":18559,"text":"Before adding the DAG to Airflow, it’s useful to check for syntax errors in your code. We can run the following command from the terminal to check for issues with the DAG:"},{"code":null,"e":18750,"s":18731,"text":"python3 sklearn.py"},{"code":null,"e":18908,"s":18750,"text":"This command will not run the DAG, but will flag any syntax errors present in the script. To check that Airflow has picked up the new DAG file, run the following command:"},{"code":null,"e":19069,"s":18908,"text":"airflow list_dags

-------------------------------------------------------------------
DAGS
-------------------------------------------------------------------
games"},{"code":null,"e":19428,"s":19069,"text":"This command will show the new DAG in the list of workflows that Airflow has discovered. 
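Note that Airflow only discovers DAG definitions placed in its dags folder. A sketch of putting sklearn.py there is shown below, assuming the AIRFLOW_HOME=~/airflow setting used earlier, since the default dags_folder is $AIRFLOW_HOME/dags:

```shell
# create the dags folder if needed and copy in the DAG definition
mkdir -p ~/airflow/dags
cp sklearn.py ~/airflow/dags/
```

The scheduler periodically rescans this folder, so the new DAG appears without restarting the service.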
To view the list of DAGs, navigate to the Airflow web server, as shown in Figure 5.4. The web server will show the schedule of the DAG, and provide a history of past runs of the workflow. To check that the DAG is actually working, browse to the BigQuery UI and check for fresh model outputs."},{"code":null,"e":19668,"s":19428,"text":"We now have an Airflow service up and running that we can use to monitor the execution of our workflows. This setup enables us to track the execution of workflows, backfill any gaps in data sets, and enable alerting for critical workflows."},{"code":null,"e":20046,"s":19668,"text":"Airflow supports a variety of operations, and many companies author custom operators for internal usage. In our first DAG, we used the Bash operator to define the task to execute, but other options are available for running Docker images, including the Docker operator. The code snippet below shows how to change our DAG to use the Docker operator instead of the Bash operator."},{"code":null,"e":20199,"s":20046,"text":"from airflow.operators.docker_operator import DockerOperator

t1 = DockerOperator(
    task_id='sklearn_pipeline',
    image='sklearn_pipeline',
    dag=dag)"},{"code":null,"e":20575,"s":20199,"text":"The DAG we defined does not have any dependencies, since the container performs all of the steps in the model pipeline. If we had a dependency, such as running a sklearn_etl container before running the model pipeline, we can use the set_upstream method as shown below. 
This configuration sets up two tasks, where the pipeline task will execute after the etl task completes."},{"code":null,"e":20811,"s":20575,"text":"t1 = BashOperator(
    task_id='sklearn_etl',
    bash_command='sudo docker run sklearn_etl',
    dag=dag)

t2 = BashOperator(
    task_id='sklearn_pipeline',
    bash_command='sudo docker run sklearn_pipeline',
    dag=dag)

t2.set_upstream(t1)"},{"code":null,"e":21232,"s":20811,"text":"Airflow provides a rich set of functionality, and we’ve only scratched the surface of what the tool provides. While we were already able to schedule the model pipeline with hosted and managed cloud offerings, it’s useful to schedule the task through Airflow for improved monitoring and versioning. The landscape of workflow tools will change over time, but many of the concepts of Airflow will translate to these new tools."},{"code":null,"e":21711,"s":21232,"text":"In this chapter we explored a batch model pipeline for applying a machine learning model to a set of users and storing the results to BigQuery. To make the pipeline portable, so that we can execute it in different environments, we created a Docker image to define the required libraries and credentials for the pipeline. We then ran the pipeline on an EC2 instance using batch commands, cron, and Airflow. We also used GKE and Cloud Composer to run the container via Kubernetes."},{"code":null,"e":22165,"s":21711,"text":"Workflow tools can be tedious to set up, especially when installing a cluster deployment, but they provide a number of benefits over manual approaches. One of the key benefits is the ability to handle DAG configuration as code, which enables code reviews and version control for workflows. 
It’s useful to get experience with configuration as code, because it is an introduction to another concept called “infra as code” that we’ll explore in Chapter 10."}],"string":"[\n {\n \"code\": null,\n \"e\": 547,\n \"s\": 172,\n \"text\": \"Airflow is becoming the industry standard for authoring data engineering and model pipeline workflows. This chapter of my book explores the process of taking a simple pipeline that runs on a single EC2 instance to a fully-managed Kubernetes ecosystem responsible for scheduling tasks. This posts omits the sections on the fully-managed solutions with GKE and Cloud Composer.\"\n },\n {\n \"code\": null,\n \"e\": 559,\n \"s\": 547,\n \"text\": \"leanpub.com\"\n },\n {\n \"code\": null,\n \"e\": 1083,\n \"s\": 559,\n \"text\": \"Model pipelines are usually part of a broader data platform that provides data sources, such as lakes and warehouses, and data stores, such as an application database. When building a pipeline, it’s useful to be able to schedule a task to run, ensure that any dependencies for the pipeline have already completed, and to backfill historic data if needed. While it’s possible to perform these types of tasks manually, there are a variety of tools that have been developed to improve the management of data science workflows.\"\n },\n {\n \"code\": null,\n \"e\": 1711,\n \"s\": 1083,\n \"text\": \"In this chapter, we’ll explore a batch model pipeline that performs a sequence of tasks in order to train and store results for a propensity model. This is a different type of task than the deployments we’ve explored so far, which have focused on serving real-time model predictions as a web endpoint. In a batch process, you perform a set of operations that store model results that are later served by a different application. 
For example, a batch model pipeline may predict which users in a game are likely to churn, and a game server fetches predictions for each user that starts a session and provides personalized offers.\"\n },\n {\n \"code\": null,\n \"e\": 2209,\n \"s\": 1711,\n \"text\": \"When building batch model pipelines for production systems, it’s important to make sure that issues with the pipeline are quickly resolved. For example, if the model pipeline is unable to fetch the most recent data for a set of users due to an upstream failure with a database, it’s useful to have a system in place that can send alerts to the team that owns the pipeline and that can rerun portions of the model pipeline in order to resolve any issues with the prerequisite data or model outputs.\"\n },\n {\n \"code\": null,\n \"e\": 2863,\n \"s\": 2209,\n \"text\": \"Workflow tools provide a solution for managing these types of problems in model pipelines. With a workflow tool, you specify the operations that need to be completed, identify dependencies between the operations, and then schedule the operations to be performed by the tool. A workflow tool is responsible for running tasks, provisioning resources, and monitoring the status of tasks. There’s a number of open source tools for building workflows including AirFlow, Luigi, MLflow, and Pentaho Kettle. We’ll focus on Airflow, because it is being widely adopted across companies and cloud platforms and are also providing fully-managed versions of Airflow.\"\n },\n {\n \"code\": null,\n \"e\": 3237,\n \"s\": 2863,\n \"text\": \"In this chapter, we’ll build a batch model pipeline that runs as a Docker container. Next, we’ll schedule the task to run on an EC2 instance using cron, and then explore a managed version of cron using Kubernetes. 
In the third section, we’ll use Airflow to define a graph of operations to perform in order to run our model pipeline, and explore a cloud offering of Airflow.\"\n },\n {\n \"code\": null,\n \"e\": 3878,\n \"s\": 3237,\n \"text\": \"A common workflow for batch model pipelines is to extract data from a data lake or data warehouse, train a model on historic user behavior, predict future user behavior for more recent data, and then save the results to a data warehouse or application database. In the gaming industry, this is a workflow I’ve seen used for building likelihood to purchase and likelihood to churn models, where the game servers use these predictions to provide different treatments to users based on the model predictions. Usually libraries like sklearn are used to develop models, and languages such as PySpark are used to scale up to the full player base.\"\n },\n {\n \"code\": null,\n \"e\": 4472,\n \"s\": 3878,\n \"text\": \"It is typical for model pipelines to require other ETLs to run in a data platform before the pipeline can run on the most recent data. For example, there may be an upstream step in the data platform that translates json strings into schematized events that are used as input for a model. In this situation, it might be necessary to rerun the pipeline on a day that issues occurred with the json transformation process. For this section, we’ll avoid this complication by using a static input data source, but the tools that we’ll explore provide the functionality needed to handle these issues.\"\n },\n {\n \"code\": null,\n \"e\": 4577,\n \"s\": 4472,\n \"text\": \"There’s typically two types of batch model pipelines that can I’ve seen deployed in the gaming industry:\"\n },\n {\n \"code\": null,\n \"e\": 4761,\n \"s\": 4577,\n \"text\": \"Persistent: A separate training workflow is used to train models from the one used to build predictions. 
A model is persisted between training runs and loaded in the serving workflow.\"\n },\n {\n \"code\": null,\n \"e\": 4914,\n \"s\": 4761,\n \"text\": \"Transient: The same workflow is used for training and serving predictions, and instead of saving the model as a file, the model is rebuilt for each run.\"\n },\n {\n \"code\": null,\n \"e\": 5263,\n \"s\": 4914,\n \"text\": \"In this section we’ll build a transient batch pipeline, where a new model is retrained with each run. This approach generally results in more compute resources being used if the training process is heavyweight, but it helps avoid issues with model drift, which we’ll discuss in Chapter 11. We’ll author a pipeline that performs the following steps:\"\n },\n {\n \"code\": null,\n \"e\": 5384,\n \"s\": 5263,\n \"text\": \"Fetches a dataset from GitHubTrains a logistic regression modelApplies the regression modelSaves the results to BigQuery\"\n },\n {\n \"code\": null,\n \"e\": 5414,\n \"s\": 5384,\n \"text\": \"Fetches a dataset from GitHub\"\n },\n {\n \"code\": null,\n \"e\": 5449,\n \"s\": 5414,\n \"text\": \"Trains a logistic regression model\"\n },\n {\n \"code\": null,\n \"e\": 5478,\n \"s\": 5449,\n \"text\": \"Applies the regression model\"\n },\n {\n \"code\": null,\n \"e\": 5508,\n \"s\": 5478,\n \"text\": \"Saves the results to BigQuery\"\n },\n {\n \"code\": null,\n \"e\": 5809,\n \"s\": 5508,\n \"text\": \"The pipeline will execute as a single Python script that performs all of these steps. For situations where you want to use intermediate outputs from steps across multiple tasks, it’s useful to decompose the pipeline into multiple processes that are integrated through a workflow tool such as Airflow.\"\n },\n {\n \"code\": null,\n \"e\": 6063,\n \"s\": 5809,\n \"text\": \"We’ll build this script by first writing a Python script that runs on an EC2 instance, and then Dockerize the script so that we can use the container in workflows. 
To get started, we need to install a library for writing a Pandas data frame to BigQuery:\"\n },\n {\n \"code\": null,\n \"e\": 6093,\n \"s\": 6063,\n \"text\": \"pip install --user pandas_gbq\"\n },\n {\n \"code\": null,\n \"e\": 6583,\n \"s\": 6093,\n \"text\": \"Next, we’ll create a file called pipeline.py that performs the four pipeline steps identified above.. The script shown below performs these steps by loading the necessary libraries, fetching the CSV file from GitHub into a Pandas data frame, splits the data frame into train and test groups to simulate historic and more recent users, builds a logistic regression model using the training data set, creates predictions for the test data set, and saves the resulting data frame to BigQuery.\"\n },\n {\n \"code\": null,\n \"e\": 7836,\n \"s\": 6583,\n \"text\": \"import pandas as pdimport numpy as npfrom google.oauth2 import service_accountfrom sklearn.linear_model import LogisticRegressionfrom datetime import datetimeimport pandas_gbq # fetch the data set and add IDs gamesDF = pd.read_csv(\\\"https://github.com/bgweber/Twitch/raw/ master/Recommendations/games-expand.csv\\\")gamesDF['User_ID'] = gamesDF.index gamesDF['New_User'] = np.floor(np.random.randint(0, 10, gamesDF.shape[0])/9)# train and test groups train = gamesDF[gamesDF['New_User'] == 0]x_train = train.iloc[:,0:10]y_train = train['label']test = gameDF[gamesDF['New_User'] == 1]x_test = test.iloc[:,0:10]# build a modelmodel = LogisticRegression()model.fit(x_train, y_train)y_pred = model.predict_proba(x_test)[:, 1]# build a predictions data frameresultDF = pd.DataFrame({'User_ID':test['User_ID'], 'Pred':y_pred}) resultDF['time'] = str(datetime. now())# save predictions to BigQuery table_id = \\\"dsp_demo.user_scores\\\"project_id = \\\"gameanalytics-123\\\"credentials = service_account.Credentials. 
from_service_account_file('dsdemo.json')pandas_gbq.to_gbq(resultDF, table_id, project_id=project_id, if_exists = 'replace', credentials=credentials)\"\n },\n {\n \"code\": null,\n \"e\": 8425,\n \"s\": 7836,\n \"text\": \"To simulate a real-world data set, the script assigns a User_ID attribute to each record, which represents a unique ID to track different users in a system. The script also splits users into historic and recent groups by assigning a New_User attribute. After building predictions for each of the recent users, we create a results data frame with the user ID, the model predictIon, and a timestamp. It’s useful to apply timestamps to predictions in order to determine if the pipeline has completed successfully. To test the model pipeline, run the following statements on the command line:\"\n },\n {\n \"code\": null,\n \"e\": 8518,\n \"s\": 8425,\n \"text\": \"export GOOGLE_APPLICATION_CREDENTIALS= /home/ec2-user/dsdemo.jsonpython3 pipeline.py\"\n },\n {\n \"code\": null,\n \"e\": 8775,\n \"s\": 8518,\n \"text\": \"If successfully, the script should create a new data set on BigQuery called dsp_demo, create a new table called user_users, and fill the table with user predictions. To test if data was actually populated in BigQuery, run the following commands in Jupyter:\"\n },\n {\n \"code\": null,\n \"e\": 8916,\n \"s\": 8775,\n \"text\": \"from google.cloud import bigqueryclient = bigquery.Client()sql = \\\"select * from dsp_demo.user_scores\\\"client.query(sql).to_dataframe().head()\"\n },\n {\n \"code\": null,\n \"e\": 9267,\n \"s\": 8916,\n \"text\": \"This script will set up a client for connecting to BigQuery and then display the result set of the query submitted to BigQuery. You can also browse to the BigQuery web UI to inspect the results of the pipeline, as shown in Figure 5.1. 
We now have a script that can fetch data, apply a machine learning model, and save the results as a single process.\"\n },\n {\n \"code\": null,\n \"e\": 9871,\n \"s\": 9267,\n \"text\": \"With many workflow tools, you can run Python code or bash scripts directly, but it’s good to set up isolated environments for executing scripts in order to avoid dependency conflicts for different libraries and runtimes. Luckily, we explored a tool for this in Chapter 4 and can use Docker with workflow tools. It’s useful to wrap Python scripts in Docker for workflow tools, because you can add libraries that may not be installed on the system responsible for scheduling, you can avoid issues with Python version conflicts, and containers are becoming a common way of defining tasks in workflow tools.\"\n },\n {\n \"code\": null,\n \"e\": 10579,\n \"s\": 9871,\n \"text\": \"To containerize our workflow, we need to define a Dockerfile, as shown below. Since we are building out a new Python environment from scratch, we’ll need to install Pandas, sklearn, and the BigQuery library. We also need to copy credentials from the EC2 instance into the container so that we can run the export command for authenticating with GCP. This works for short term deployments, but for longer running containers it’s better to run the export in the instantiated container rather than copying static credentials into images. 
The Dockerfile lists out the Python libraries needed to run the script, copies in the local files needed for execution, exports credentials, and specifies the script to run.\"\n },\n {\n \"code\": null,\n \"e\": 11006,\n \"s\": 10579,\n \"text\": \"FROM ubuntu:latestMAINTAINER Ben Weber RUN apt-get update \\\\ && apt-get install -y python3-pip python3-dev \\\\ && cd /usr/local/bin \\\\ && ln -s /usr/bin/python3 python \\\\ && pip3 install pandas \\\\ && pip3 install sklearn \\\\ && pip3 install pandas_gbq COPY pipeline.py pipeline.py COPY /home/ec2-user/dsdemo.json dsdemo.jsonRUN export GOOGLE_APPLICATION_CREDENTIALS=/dsdemo.jsonENTRYPOINT [\\\"python3\\\",\\\"pipeline.py\\\"]\"\n },\n {\n \"code\": null,\n \"e\": 11255,\n \"s\": 11006,\n \"text\": \"Before deploying this script to production, we need to build an image from the script and test a sample run. The commands below show how to build an image from the Dockerfile, list the Docker images, and run an instance of the model pipeline image.\"\n },\n {\n \"code\": null,\n \"e\": 11353,\n \"s\": 11255,\n \"text\": \"sudo docker image build -t \\\"sklearn_pipeline\\\" .sudo docker imagessudo docker run sklearn_pipeline\"\n },\n {\n \"code\": null,\n \"e\": 11723,\n \"s\": 11353,\n \"text\": \"After running the last command, the containerized pipeline should update the model predictions in BigQuery. We now have a model pipeline that we can run as a single bash command, which we now need to schedule to run at a specific frequency. For testing purposes, we’ll run the script every minute, but in practice models are typically executed hourly, daily, or weekly.\"\n },\n {\n \"code\": null,\n \"e\": 12214,\n \"s\": 11723,\n \"text\": \"A common requirement for model pipelines is running a task at a regular frequency, such as every day or every hour. Cron is a utility that provides scheduling functionality for machines running the Linux operating system. 
You can Set up a scheduled task using the crontab utility and assign a cron expression that defines how frequently to run the command. Cron jobs run directly on the machine where cron is utilized, and can make use of the runtimes and libraries installed on the system.\"\n },\n {\n \"code\": null,\n \"e\": 12719,\n \"s\": 12214,\n \"text\": \"There are a number of challenges with using cron in production-grade systems, but it’s a great way to get started with scheduling a small number of tasks and it’s good to learn the cron expression syntax that is used in many scheduling systems. The main issue with the cron utility is that it runs on a single machine, and does not natively integrate with tools such as version control. If your machine goes down, then you’ll need to recreate your environment and update your cron table on a new machine.\"\n },\n {\n \"code\": null,\n \"e\": 12995,\n \"s\": 12719,\n \"text\": \"A cron expression defines how frequently to run a command. It is a sequence of 5 numbers that define when to execute for different time granularities, and it can include wildcards to always run for certain time periods. A few sample expresions are shown in the snippet below:\"\n },\n {\n \"code\": null,\n \"e\": 13100,\n \"s\": 12995,\n \"text\": \"# run every minute * * * * * # Run at 10am UTC everyday0 10 * * * # Run at 04:15 on Saturday15 4 * * 6\"\n },\n {\n \"code\": null,\n \"e\": 13257,\n \"s\": 13100,\n \"text\": \"When getting started with cron, it’s good to use tools to validate your expressions. Cron expressions are used in Airflow and many other scheduling systems.\"\n },\n {\n \"code\": null,\n \"e\": 13407,\n \"s\": 13257,\n \"text\": \"We can use cron to schedule our model pipeline to run on a regular frequency. 
To schedule a command to run, run the following command on the console:\"\n },\n {\n \"code\": null,\n \"e\": 13418,\n \"s\": 13407,\n \"text\": \"crontab -e\"\n },\n {\n \"code\": null,\n \"e\": 13578,\n \"s\": 13418,\n \"text\": \"This command will open up the cron table file for editing in vi. To schedule the pipeline to run every minute, add the following commands to the file and save.\"\n },\n {\n \"code\": null,\n \"e\": 13640,\n \"s\": 13578,\n \"text\": \"# run every minute * * * * * sudo docker run sklearn_pipeline\"\n },\n {\n \"code\": null,\n \"e\": 14054,\n \"s\": 13640,\n \"text\": \"After exiting the editor, the cron table will be updated with the new command to run. The second part of the cron statement is the command to run. when defining the command to run, it’s useful to include full file paths. With Docker, we just need to define the image to run. To check that the script is actually executing, browse to the BigQuery UI and check the time column on the user_scores model output table.\"\n },\n {\n \"code\": null,\n \"e\": 14305,\n \"s\": 14054,\n \"text\": \"We now have a utility for scheduling our model pipeline on a regular schedule. However, if the machine goes down then our pipeline will fail to execute. To handle this situation, it’s good to explore cloud offerings with cron scheduling capabilities.\"\n },\n {\n \"code\": null,\n \"e\": 14725,\n \"s\": 14305,\n \"text\": \"Cron is useful for simple pipelines, but runs into challenges when tasks have dependencies on other tasks which can fail. To help resolve this issue, where tasks have dependencies and only portions of a pipeline need to be rerun, we can leverage workflow tools. 
Apache Airflow is currently the most popular tool, but other open source projects are available and provide similar functionality, including Luigi and MLflow.\"\n },\n {\n \"code\": null,\n \"e\": 14816,\n \"s\": 14725,\n \"text\": \"There are a few situations where workflow tools provide benefits over using cron directly:\"\n },\n {\n \"code\": null,\n \"e\": 14909,\n \"s\": 14816,\n \"text\": \"Dependencies: Workflow tools define graphs of operations, which makes dependencies explicit.\"\n },\n {\n \"code\": null,\n \"e\": 14999,\n \"s\": 14909,\n \"text\": \"Backfills: It may be necessary to run an ETL on old data, for a range of different dates.\"\n },\n {\n \"code\": null,\n \"e\": 15088,\n \"s\": 14999,\n \"text\": \"Versioning: Most workflow tools integrate with version control systems to manage graphs.\"\n },\n {\n \"code\": null,\n \"e\": 15179,\n \"s\": 15088,\n \"text\": \"Alerting: These tools can send out emails or generate PagerDuty alerts when failures occur.\"\n },\n {\n \"code\": null,\n \"e\": 15432,\n \"s\": 15179,\n \"text\": \"Workflow tools are particularly useful in environments where different teams are scheduling tasks. For example, many game companies have data scientists that schedule model pipelines which are dependent on ETLs scheduled by a separate engineering team.\"\n },\n {\n \"code\": null,\n \"e\": 15578,\n \"s\": 15432,\n \"text\": \"In this section, we’ll schedule our task to run on an EC2 instance using hosted Airflow, and then explore a fully-managed version of Airflow on GCP.\"\n },\n {\n \"code\": null,\n \"e\": 15925,\n \"s\": 15578,\n \"text\": \"Airflow is an open source workflow tool that was originally developed by Airbnb and publicly released in 2015. It helps solve a challenge that many companies face, which is scheduling tasks that have many dependencies. 
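One of the benefits listed above, backfills, amounts to re-running a task once per execution date over a historical range. A minimal sketch of generating that range (the date values are illustrative):

```python
from datetime import date, timedelta

def backfill_dates(start: date, end: date):
    """Yield each execution date in [start, end] - the range of dates
    a workflow tool would re-run a task for when backfilling."""
    d = start
    while d <= end:
        yield d
        d += timedelta(days=1)

# e.g. re-run an ETL for the first three days of November 2019:
dates = list(backfill_dates(date(2019, 11, 1), date(2019, 11, 3)))
assert [d.day for d in dates] == [1, 2, 3]
```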
One of the core concepts in this tool is a graph that defines the tasks to perform and the relationships between these tasks.\"\n },\n {\n \"code\": null,\n \"e\": 16223,\n \"s\": 15925,\n \"text\": \"In Airflow, a graph is referred to as a DAG, which is an acronym for directed acyclic graph. A DAG is a set of tasks to perform, where each task has zero or more upstream dependencies. One of the constraints is that cycles are not allowed, where two tasks have upstream dependencies on each other.\"\n },\n {\n \"code\": null,\n \"e\": 16649,\n \"s\": 16223,\n \"text\": \"DAGs are set up using Python code, which is one of the differences from other workflow tools such as Pentaho Kettle, which is GUI-focused. The Airflow approach is called “configuration as code”, because a Python script defines the operations to perform within a workflow graph. Using code instead of a GUI to configure workflows is useful because it makes it much easier to integrate with version control tools such as GitHub.\"\n },\n {\n \"code\": null,\n \"e\": 16850,\n \"s\": 16649,\n \"text\": \"To get started with Airflow, we need to install the library, initialize the service, and run the scheduler. To perform these steps, run the following commands on an EC2 instance or your local machine:\"\n },\n {\n \"code\": null,\n \"e\": 16944,\n \"s\": 16850,\n \"text\": \"export AIRFLOW_HOME=~/airflow\npip install --user apache-airflow\nairflow initdb\nairflow scheduler\"\n },\n {\n \"code\": null,\n \"e\": 17113,\n \"s\": 16944,\n \"text\": \"Airflow also provides a web frontend for managing DAGs that have been scheduled. To start this service, run the following command in a new terminal on the same machine.\"\n },\n {\n \"code\": null,\n \"e\": 17139,\n \"s\": 17113,\n \"text\": \"airflow webserver -p 8080\"\n },\n {\n \"code\": null,\n \"e\": 17323,\n \"s\": 17139,\n \"text\": \"This command tells Airflow to start the web service on port 8080. 
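Before opening a browser, you can confirm that something is actually listening on the port. The helper below is a generic TCP check, not part of Airflow:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port.

    Generic check, useful for confirming the webserver came up
    before browsing to it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. after `airflow webserver -p 8080`:
# is_listening("localhost", 8080)
```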
You can open a web browser at this port on your machine to view the web frontend for Airflow, as shown in Figure 5.3.\"\n },\n {\n \"code\": null,\n \"e\": 17536,\n \"s\": 17323,\n \"text\": \"Airflow comes preloaded with a number of example DAGs. For our model pipeline, we’ll create a new DAG and then notify Airflow of the update. We’ll create a file called sklearn.py with the following DAG definition:\"\n },\n {\n \"code\": null,\n \"e\": 18018,\n \"s\": 17536,\n \"text\": \"from airflow import DAG\nfrom airflow.operators.bash_operator import BashOperator\nfrom datetime import datetime, timedelta\n\ndefault_args = {\n 'owner': 'Airflow',\n 'depends_on_past': False,\n 'email': 'bgweber@gmail.com',\n 'start_date': datetime(2019, 11, 1),\n 'email_on_failure': True,\n}\n\ndag = DAG('games', default_args=default_args,\n schedule_interval=\\\"* * * * *\\\")\n\nt1 = BashOperator(\n task_id='sklearn_pipeline',\n bash_command='sudo docker run sklearn_pipeline',\n dag=dag)\"\n },\n {\n \"code\": null,\n \"e\": 18559,\n \"s\": 18018,\n \"text\": \"There are a few steps in this Python script to call out. The script uses a Bash operator to define the action to perform; the operator is defined as the last step in the script and specifies the command to run. The DAG is instantiated with a number of input arguments that define the workflow settings, such as who to email when the task fails. A cron expression is passed to the DAG object to define the schedule for the task, and the DAG object is passed to the Bash operator to associate the task with this graph of operations.\"\n },\n {\n \"code\": null,\n \"e\": 18731,\n \"s\": 18559,\n \"text\": \"Before adding the DAG to Airflow, it’s useful to check for syntax errors in your code. 
We can run the following command from the terminal to check for issues with the DAG:\"\n },\n {\n \"code\": null,\n \"e\": 18750,\n \"s\": 18731,\n \"text\": \"python3 sklearn.py\"\n },\n {\n \"code\": null,\n \"e\": 18908,\n \"s\": 18750,\n \"text\": \"This command will not run the DAG, but will flag any syntax errors present in the script. To check that Airflow has picked up the new DAG file, run the following command:\"\n },\n {\n \"code\": null,\n \"e\": 19069,\n \"s\": 18908,\n \"text\": \"airflow list_dags\n-------------------------------------------------------------------\nDAGS\n-------------------------------------------------------------------\ngames\"\n },\n {\n \"code\": null,\n \"e\": 19428,\n \"s\": 19069,\n \"text\": \"This command lists the DAGs that Airflow has loaded, which should now include the games workflow. To view the list of DAGs, navigate to the Airflow web server, as shown in Figure 5.4. The web server will show the schedule of the DAG, and provide a history of past runs of the workflow. To check that the DAG is actually working, browse to the BigQuery UI and check for fresh model outputs.\"\n },\n {\n \"code\": null,\n \"e\": 19668,\n \"s\": 19428,\n \"text\": \"We now have an Airflow service up and running that we can use to monitor the execution of our workflows. This setup enables us to track the execution of workflows, backfill any gaps in data sets, and enable alerting for critical workflows.\"\n },\n {\n \"code\": null,\n \"e\": 20046,\n \"s\": 19668,\n \"text\": \"Airflow supports a variety of operations, and many companies author custom operators for internal usage. In our first DAG, we used the Bash operator to define the task to execute, but other options are available for running Docker images, including the Docker operator. 
The code snippet below shows how to change our DAG to use the Docker operator instead of the Bash operator.\"\n },\n {\n \"code\": null,\n \"e\": 20199,\n \"s\": 20046,\n \"text\": \"from airflow.operators.docker_operator import DockerOperator\n\nt1 = DockerOperator(\n task_id='sklearn_pipeline',\n image='sklearn_pipeline',\n dag=dag)\"\n },\n {\n \"code\": null,\n \"e\": 20575,\n \"s\": 20199,\n \"text\": \"The DAG we defined does not have any dependencies, since the container performs all of the steps in the model pipeline. If we had a dependency, such as running a sklearn_etl container before running the model pipeline, we can use the set_upstream command as shown below. This configuration sets up two tasks, where the pipeline task will execute after the etl task completes.\"\n },\n {\n \"code\": null,\n \"e\": 20811,\n \"s\": 20575,\n \"text\": \"t1 = BashOperator(\n task_id='sklearn_etl',\n bash_command='sudo docker run sklearn_etl',\n dag=dag)\n\nt2 = BashOperator(\n task_id='sklearn_pipeline',\n bash_command='sudo docker run sklearn_pipeline',\n dag=dag)\n\nt2.set_upstream(t1)\"\n },\n {\n \"code\": null,\n \"e\": 21232,\n \"s\": 20811,\n \"text\": \"Airflow provides a rich set of functionality, and we’ve only touched the surface of what the tool provides. While we were already able to schedule the model pipeline with hosted and managed cloud offerings, it’s useful to schedule the task through Airflow for improved monitoring and versioning. The landscape of workflow tools will change over time, but many of the concepts of Airflow will translate to these new tools.\"\n },\n {\n \"code\": null,\n \"e\": 21711,\n \"s\": 21232,\n \"text\": \"In this chapter, we explored a batch model pipeline for applying a machine learning model to a set of users and storing the results to BigQuery. To make the pipeline portable, so that we can execute it in different environments, we created a Docker image to define the required libraries and credentials for the pipeline. 
We then ran the pipeline on an EC2 instance using batch commands, cron, and Airflow. We also used GKE and Cloud Composer to run the container via Kubernetes.\"\n },\n {\n \"code\": null,\n \"e\": 22165,\n \"s\": 21711,\n \"text\": \"Workflow tools can be tedious to set up, especially when installing a cluster deployment, but they provide a number of benefits over manual approaches. One of the key benefits is the ability to handle DAG configuration as code, which enables code reviews and version control for workflows. It’s useful to get experience with configuration as code, because it is an introduction to another concept called “infra as code” that we’ll explore in Chapter 10.\"\n }\n]"}}},{"rowIdx":570,"cells":{"title":{"kind":"string","value":"Arduino - Servo Motor"},"text":{"kind":"string","value":"A Servo Motor is a small device that has an output shaft. This shaft can be positioned to specific angular positions by sending the servo a coded signal. As long as the coded signal exists on the input line, the servo will maintain the angular position of the shaft. If the coded signal changes, the angular position of the shaft changes. In practice, servos are used in radio-controlled airplanes to position control surfaces like the elevators and rudders. They are also used in radio-controlled cars, puppets, and of course, robots.\nServos are extremely useful in robotics. The motors are small, have built-in control circuitry, and are extremely powerful for their size. A standard servo such as the Futaba S-148 has 42 oz/inches of torque, which is strong for its size. It also draws power proportional to the mechanical load. A lightly loaded servo, therefore, does not consume much energy.\nThe guts of a servo motor is shown in the following picture. You can see the control circuitry, the motor, a set of gears, and the case. You can also see the 3 wires that connect to the outside world. 
One is for power (+5volts), ground, and the white wire is the control wire.\nThe servo motor has some control circuits and a potentiometer (a variable resistor, aka pot) connected to the output shaft. In the picture above, the pot can be seen on the right side of the circuit board. This pot allows the control circuitry to monitor the current angle of the servo motor.\nIf the shaft is at the correct angle, then the motor shuts off. If the circuit finds that the angle is not correct, it will turn the motor until it is at a desired angle. The output shaft of the servo is capable of traveling somewhere around 180 degrees. Usually, it is somewhere in the 210-degree range, however, it varies depending on the manufacturer. A normal servo is used to control an angular motion of 0 to 180 degrees. It is mechanically not capable of turning any farther due to a mechanical stop built on to the main output gear.\nThe power applied to the motor is proportional to the distance it needs to travel. So, if the shaft needs to turn a large distance, the motor will run at full speed. If it needs to turn only a small amount, the motor will run at a slower speed. This is called proportional control.\nThe control wire is used to communicate the angle. The angle is determined by the duration of a pulse that is applied to the control wire. This is called Pulse Coded Modulation. The servo expects to see a pulse every 20 milliseconds (.02 seconds). The length of the pulse will determine how far the motor turns. A 1.5 millisecond pulse, for example, will make the motor turn to the 90-degree position (often called as the neutral position). If the pulse is shorter than 1.5 milliseconds, then the motor will turn the shaft closer to 0 degrees. 
If the pulse is longer than 1.5 milliseconds, the shaft turns closer to 180 degrees.\nYou will need the following components −\n1 × Arduino UNO board\n1 × Servo Motor\n1 × ULN2003 driving IC\n1 × 10 KΩ Resistor\nFollow the circuit diagram and make the connections as shown in the image given below.\nOpen the Arduino IDE software on your computer. Coding in the Arduino language will control your circuit. Open a new sketch File by clicking on New.\n/* Controlling a servo position using a potentiometer (variable resistor) */\n\n#include \n Servo myservo; // create servo object to control a servo\n int potpin = 0; // analog pin used to connect the potentiometer\n int val; // variable to read the value from the analog pin\n\nvoid setup() {\n myservo.attach(9); // attaches the servo on pin 9 to the servo object\n}\n\nvoid loop() {\n val = analogRead(potpin);\n // reads the value of the potentiometer (value between 0 and 1023)\n val = map(val, 0, 1023, 0, 180);\n // scale it to use it with the servo (value between 0 and 180)\n myservo.write(val); // sets the servo position according to the scaled value\n delay(15);\n}\nServo motors have three terminals - power, ground, and signal. The power wire is typically red, and should be connected to the 5V pin on the Arduino. The ground wire is typically black or brown and should be connected to one terminal of ULN2003 IC (10 -16). To protect your Arduino board from damage, you will need some driver IC to do that. Here we have used ULN2003 IC to drive the servo motor. The signal pin is typically yellow or orange and should be connected to Arduino pin number 9.\nA voltage divider/potential divider are resistors in a series circuit that scale the output voltage to a particular ratio of the input voltage applied. Following is the circuit diagram −\n$$V_{out} = (V_{in} \\times R_{2})/ (R_{1} + R_{2})$$\nVout is the output potential, which depends on the applied input voltage (Vin) and resistors (R1 and R2) in the series. 
It means that the current flowing through R1 will also flow through R2 without being divided. In the above equation, as the value of R2 changes, the Vout scales accordingly with respect to the input voltage, Vin.\nTypically, a potentiometer is a potential divider, which can scale the output voltage of the circuit based on the value of the variable resistor, which is scaled using the knob. It has three pins: GND, Signal, and +5V as shown in the diagram below −\nBy changing the pot’s NOP position, servo motor will change its angle.\n\n 65 Lectures \n 6.5 hours \n\n Amit Rana\n\n 43 Lectures \n 3 hours \n\n Amit Rana\n\n 20 Lectures \n 2 hours \n\n Ashraf Said\n\n 19 Lectures \n 1.5 hours \n\n Ashraf Said\n\n 11 Lectures \n 47 mins\n\n Ashraf Said\n\n 9 Lectures \n 41 mins\n\n Ashraf Said\n Print\n Add Notes\n Bookmark this page"},"parsed":{"kind":"list like","value":[{"code":null,"e":3406,"s":2870,"text":"A Servo Motor is a small device that has an output shaft. This shaft can be positioned to specific angular positions by sending the servo a coded signal. As long as the coded signal exists on the input line, the servo will maintain the angular position of the shaft. If the coded signal changes, the angular position of the shaft changes. In practice, servos are used in radio-controlled airplanes to position control surfaces like the elevators and rudders. They are also used in radio-controlled cars, puppets, and of course, robots."},{"code":null,"e":3767,"s":3406,"text":"Servos are extremely useful in robotics. The motors are small, have built-in control circuitry, and are extremely powerful for their size. A standard servo such as the Futaba S-148 has 42 oz/inches of torque, which is strong for its size. It also draws power proportional to the mechanical load. A lightly loaded servo, therefore, does not consume much energy."},{"code":null,"e":4044,"s":3767,"text":"The guts of a servo motor is shown in the following picture. 
You can see the control circuitry, the motor, a set of gears, and the case. You can also see the 3 wires that connect to the outside world. One is for power (+5volts), ground, and the white wire is the control wire."},{"code":null,"e":4337,"s":4044,"text":"The servo motor has some control circuits and a potentiometer (a variable resistor, aka pot) connected to the output shaft. In the picture above, the pot can be seen on the right side of the circuit board. This pot allows the control circuitry to monitor the current angle of the servo motor."},{"code":null,"e":4878,"s":4337,"text":"If the shaft is at the correct angle, then the motor shuts off. If the circuit finds that the angle is not correct, it will turn the motor until it is at a desired angle. The output shaft of the servo is capable of traveling somewhere around 180 degrees. Usually, it is somewhere in the 210-degree range, however, it varies depending on the manufacturer. A normal servo is used to control an angular motion of 0 to 180 degrees. It is mechanically not capable of turning any farther due to a mechanical stop built on to the main output gear."},{"code":null,"e":5160,"s":4878,"text":"The power applied to the motor is proportional to the distance it needs to travel. So, if the shaft needs to turn a large distance, the motor will run at full speed. If it needs to turn only a small amount, the motor will run at a slower speed. This is called proportional control."},{"code":null,"e":5789,"s":5160,"text":"The control wire is used to communicate the angle. The angle is determined by the duration of a pulse that is applied to the control wire. This is called Pulse Coded Modulation. The servo expects to see a pulse every 20 milliseconds (.02 seconds). The length of the pulse will determine how far the motor turns. A 1.5 millisecond pulse, for example, will make the motor turn to the 90-degree position (often called as the neutral position). 
If the pulse is shorter than 1.5 milliseconds, then the motor will turn the shaft closer to 0 degrees. If the pulse is longer than 1.5 milliseconds, the shaft turns closer to 180 degrees."},{"code":null,"e":5830,"s":5789,"text":"You will need the following components −"},{"code":null,"e":5852,"s":5830,"text":"1 × Arduino UNO board"},{"code":null,"e":5868,"s":5852,"text":"1 × Servo Motor"},{"code":null,"e":5891,"s":5868,"text":"1 × ULN2003 driving IC"},{"code":null,"e":5910,"s":5891,"text":"1 × 10 KΩ Resistor"},{"code":null,"e":5997,"s":5910,"text":"Follow the circuit diagram and make the connections as shown in the image given below."},{"code":null,"e":6146,"s":5997,"text":"Open the Arduino IDE software on your computer. Coding in the Arduino language will control your circuit. Open a new sketch File by clicking on New."},{"code":null,"e":6835,"s":6146,"text":"/* Controlling a servo position using a potentiometer (variable resistor) */\n\n#include \n Servo myservo; // create servo object to control a servo\n int potpin = 0; // analog pin used to connect the potentiometer\n int val; // variable to read the value from the analog pin\n\nvoid setup() {\n myservo.attach(9); // attaches the servo on pin 9 to the servo object\n}\n\nvoid loop() {\n val = analogRead(potpin);\n // reads the value of the potentiometer (value between 0 and 1023)\n val = map(val, 0, 1023, 0, 180);\n // scale it to use it with the servo (value between 0 and 180)\n myservo.write(val); // sets the servo position according to the scaled value\n delay(15);\n}"},{"code":null,"e":7326,"s":6835,"text":"Servo motors have three terminals - power, ground, and signal. The power wire is typically red, and should be connected to the 5V pin on the Arduino. The ground wire is typically black or brown and should be connected to one terminal of ULN2003 IC (10 -16). To protect your Arduino board from damage, you will need some driver IC to do that. Here we have used ULN2003 IC to drive the servo motor. 
The signal pin is typically yellow or orange and should be connected to Arduino pin number 9."},{"code":null,"e":7513,"s":7326,"text":"A voltage divider/potential divider are resistors in a series circuit that scale the output voltage to a particular ratio of the input voltage applied. Following is the circuit diagram −"},{"code":null,"e":7566,"s":7513,"text":"$$V_{out} = (V_{in} \\times R_{2})/ (R_{1} + R_{2})$$"},{"code":null,"e":7899,"s":7566,"text":"Vout is the output potential, which depends on the applied input voltage (Vin) and resistors (R1 and R2) in the series. It means that the current flowing through R1 will also flow through R2 without being divided. In the above equation, as the value of R2 changes, the Vout scales accordingly with respect to the input voltage, Vin."},{"code":null,"e":8149,"s":7899,"text":"Typically, a potentiometer is a potential divider, which can scale the output voltage of the circuit based on the value of the variable resistor, which is scaled using the knob. 
It has three pins: GND, Signal, and +5V as shown in the diagram below −"},{"code":null,"e":8220,"s":8149,"text":"By changing the pot’s NOP position, servo motor will change its angle."},{"code":null,"e":8255,"s":8220,"text":"\n 65 Lectures \n 6.5 hours \n"},{"code":null,"e":8266,"s":8255,"text":" Amit Rana"},{"code":null,"e":8299,"s":8266,"text":"\n 43 Lectures \n 3 hours \n"},{"code":null,"e":8310,"s":8299,"text":" Amit Rana"},{"code":null,"e":8343,"s":8310,"text":"\n 20 Lectures \n 2 hours \n"},{"code":null,"e":8356,"s":8343,"text":" Ashraf Said"},{"code":null,"e":8391,"s":8356,"text":"\n 19 Lectures \n 1.5 hours \n"},{"code":null,"e":8404,"s":8391,"text":" Ashraf Said"},{"code":null,"e":8436,"s":8404,"text":"\n 11 Lectures \n 47 mins\n"},{"code":null,"e":8449,"s":8436,"text":" Ashraf Said"},{"code":null,"e":8480,"s":8449,"text":"\n 9 Lectures \n 41 mins\n"},{"code":null,"e":8493,"s":8480,"text":" Ashraf Said"},{"code":null,"e":8500,"s":8493,"text":" Print"},{"code":null,"e":8511,"s":8500,"text":" Add Notes"}],"string":"[\n {\n \"code\": null,\n \"e\": 3406,\n \"s\": 2870,\n \"text\": \"A Servo Motor is a small device that has an output shaft. This shaft can be positioned to specific angular positions by sending the servo a coded signal. As long as the coded signal exists on the input line, the servo will maintain the angular position of the shaft. If the coded signal changes, the angular position of the shaft changes. In practice, servos are used in radio-controlled airplanes to position control surfaces like the elevators and rudders. They are also used in radio-controlled cars, puppets, and of course, robots.\"\n },\n {\n \"code\": null,\n \"e\": 3767,\n \"s\": 3406,\n \"text\": \"Servos are extremely useful in robotics. The motors are small, have built-in control circuitry, and are extremely powerful for their size. A standard servo such as the Futaba S-148 has 42 oz/inches of torque, which is strong for its size. 
It also draws power proportional to the mechanical load. A lightly loaded servo, therefore, does not consume much energy.\"\n },\n {\n \"code\": null,\n \"e\": 4044,\n \"s\": 3767,\n \"text\": \"The guts of a servo motor is shown in the following picture. You can see the control circuitry, the motor, a set of gears, and the case. You can also see the 3 wires that connect to the outside world. One is for power (+5volts), ground, and the white wire is the control wire.\"\n },\n {\n \"code\": null,\n \"e\": 4337,\n \"s\": 4044,\n \"text\": \"The servo motor has some control circuits and a potentiometer (a variable resistor, aka pot) connected to the output shaft. In the picture above, the pot can be seen on the right side of the circuit board. This pot allows the control circuitry to monitor the current angle of the servo motor.\"\n },\n {\n \"code\": null,\n \"e\": 4878,\n \"s\": 4337,\n \"text\": \"If the shaft is at the correct angle, then the motor shuts off. If the circuit finds that the angle is not correct, it will turn the motor until it is at a desired angle. The output shaft of the servo is capable of traveling somewhere around 180 degrees. Usually, it is somewhere in the 210-degree range, however, it varies depending on the manufacturer. A normal servo is used to control an angular motion of 0 to 180 degrees. It is mechanically not capable of turning any farther due to a mechanical stop built on to the main output gear.\"\n },\n {\n \"code\": null,\n \"e\": 5160,\n \"s\": 4878,\n \"text\": \"The power applied to the motor is proportional to the distance it needs to travel. So, if the shaft needs to turn a large distance, the motor will run at full speed. If it needs to turn only a small amount, the motor will run at a slower speed. This is called proportional control.\"\n },\n {\n \"code\": null,\n \"e\": 5789,\n \"s\": 5160,\n \"text\": \"The control wire is used to communicate the angle. 
The angle is determined by the duration of a pulse that is applied to the control wire. This is called Pulse Coded Modulation. The servo expects to see a pulse every 20 milliseconds (.02 seconds). The length of the pulse will determine how far the motor turns. A 1.5 millisecond pulse, for example, will make the motor turn to the 90-degree position (often called as the neutral position). If the pulse is shorter than 1.5 milliseconds, then the motor will turn the shaft closer to 0 degrees. If the pulse is longer than 1.5 milliseconds, the shaft turns closer to 180 degrees.\"\n },\n {\n \"code\": null,\n \"e\": 5830,\n \"s\": 5789,\n \"text\": \"You will need the following components −\"\n },\n {\n \"code\": null,\n \"e\": 5852,\n \"s\": 5830,\n \"text\": \"1 × Arduino UNO board\"\n },\n {\n \"code\": null,\n \"e\": 5868,\n \"s\": 5852,\n \"text\": \"1 × Servo Motor\"\n },\n {\n \"code\": null,\n \"e\": 5891,\n \"s\": 5868,\n \"text\": \"1 × ULN2003 driving IC\"\n },\n {\n \"code\": null,\n \"e\": 5910,\n \"s\": 5891,\n \"text\": \"1 × 10 KΩ Resistor\"\n },\n {\n \"code\": null,\n \"e\": 5997,\n \"s\": 5910,\n \"text\": \"Follow the circuit diagram and make the connections as shown in the image given below.\"\n },\n {\n \"code\": null,\n \"e\": 6146,\n \"s\": 5997,\n \"text\": \"Open the Arduino IDE software on your computer. Coding in the Arduino language will control your circuit. 
Open a new sketch File by clicking on New.\"\n },\n {\n \"code\": null,\n \"e\": 6835,\n \"s\": 6146,\n \"text\": \"/* Controlling a servo position using a potentiometer (variable resistor) */\\n\\n#include \\n Servo myservo; // create servo object to control a servo\\n int potpin = 0; // analog pin used to connect the potentiometer\\n int val; // variable to read the value from the analog pin\\n\\nvoid setup() {\\n myservo.attach(9); // attaches the servo on pin 9 to the servo object\\n}\\n\\nvoid loop() {\\n val = analogRead(potpin);\\n // reads the value of the potentiometer (value between 0 and 1023)\\n val = map(val, 0, 1023, 0, 180);\\n // scale it to use it with the servo (value between 0 and 180)\\n myservo.write(val); // sets the servo position according to the scaled value\\n delay(15);\\n}\"\n },\n {\n \"code\": null,\n \"e\": 7326,\n \"s\": 6835,\n \"text\": \"Servo motors have three terminals - power, ground, and signal. The power wire is typically red, and should be connected to the 5V pin on the Arduino. The ground wire is typically black or brown and should be connected to one terminal of ULN2003 IC (10 -16). To protect your Arduino board from damage, you will need some driver IC to do that. Here we have used ULN2003 IC to drive the servo motor. The signal pin is typically yellow or orange and should be connected to Arduino pin number 9.\"\n },\n {\n \"code\": null,\n \"e\": 7513,\n \"s\": 7326,\n \"text\": \"A voltage divider/potential divider are resistors in a series circuit that scale the output voltage to a particular ratio of the input voltage applied. Following is the circuit diagram −\"\n },\n {\n \"code\": null,\n \"e\": 7566,\n \"s\": 7513,\n \"text\": \"$$V_{out} = (V_{in} \\\\times R_{2})/ (R_{1} + R_{2})$$\"\n },\n {\n \"code\": null,\n \"e\": 7899,\n \"s\": 7566,\n \"text\": \"Vout is the output potential, which depends on the applied input voltage (Vin) and resistors (R1 and R2) in the series. 
It means that the current flowing through R1 will also flow through R2 without being divided. In the above equation, as the value of R2 changes, the Vout scales accordingly with respect to the input voltage, Vin.\"\n },\n {\n \"code\": null,\n \"e\": 8149,\n \"s\": 7899,\n \"text\": \"Typically, a potentiometer is a potential divider, which can scale the output voltage of the circuit based on the value of the variable resistor, which is scaled using the knob. It has three pins: GND, Signal, and +5V as shown in the diagram below −\"\n },\n {\n \"code\": null,\n \"e\": 8220,\n \"s\": 8149,\n \"text\": \"By changing the pot’s NOP position, servo motor will change its angle.\"\n },\n {\n \"code\": null,\n \"e\": 8255,\n \"s\": 8220,\n \"text\": \"\\n 65 Lectures \\n 6.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 8266,\n \"s\": 8255,\n \"text\": \" Amit Rana\"\n },\n {\n \"code\": null,\n \"e\": 8299,\n \"s\": 8266,\n \"text\": \"\\n 43 Lectures \\n 3 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 8310,\n \"s\": 8299,\n \"text\": \" Amit Rana\"\n },\n {\n \"code\": null,\n \"e\": 8343,\n \"s\": 8310,\n \"text\": \"\\n 20 Lectures \\n 2 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 8356,\n \"s\": 8343,\n \"text\": \" Ashraf Said\"\n },\n {\n \"code\": null,\n \"e\": 8391,\n \"s\": 8356,\n \"text\": \"\\n 19 Lectures \\n 1.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 8404,\n \"s\": 8391,\n \"text\": \" Ashraf Said\"\n },\n {\n \"code\": null,\n \"e\": 8436,\n \"s\": 8404,\n \"text\": \"\\n 11 Lectures \\n 47 mins\\n\"\n },\n {\n \"code\": null,\n \"e\": 8449,\n \"s\": 8436,\n \"text\": \" Ashraf Said\"\n },\n {\n \"code\": null,\n \"e\": 8480,\n \"s\": 8449,\n \"text\": \"\\n 9 Lectures \\n 41 mins\\n\"\n },\n {\n \"code\": null,\n \"e\": 8493,\n \"s\": 8480,\n \"text\": \" Ashraf Said\"\n },\n {\n \"code\": null,\n \"e\": 8500,\n \"s\": 8493,\n \"text\": \" Print\"\n },\n {\n \"code\": null,\n \"e\": 8511,\n \"s\": 8500,\n \"text\": \" Add Notes\"\n 
}\n]"}}},{"rowIdx":571,"cells":{"title":{"kind":"string","value":"How to delete all the documents from a collection in MongoDB?"},"text":{"kind":"string","value":"If you want to delete all documents from the collection, you can use deleteMany(). Let us first create a collection and insert some documents to it:\n> db.deleteDocumentsDemo.insert({\"Name\":\"Larry\",\"Age\":23});\nWriteResult({ \"nInserted\" : 1 })\n> db.deleteDocumentsDemo.insert({\"Name\":\"Mike\",\"Age\":21});\nWriteResult({ \"nInserted\" : 1 })\n> db.deleteDocumentsDemo.insert({\"Name\":\"Sam\",\"Age\":24});\nWriteResult({ \"nInserted\" : 1 })\nNow display all the documents from the collection. The query is as follows:\n> db.deleteDocumentsDemo.find().pretty();\nThe following is the output:\n{\n \"_id\" : ObjectId(\"5c6ab0e064f3d70fcc914805\"),\n \"Name\" : \"Larry\",\n \"Age\" : 23\n}\n{\n \"_id\" : ObjectId(\"5c6ab0ef64f3d70fcc914806\"),\n \"Name\" : \"Mike\",\n \"Age\" : 21\n}\n{\n \"_id\" : ObjectId(\"5c6ab0f864f3d70fcc914807\"),\n \"Name\" : \"Sam\",\n \"Age\" : 24\n}\nThe query is as follows:\n> db.deleteDocumentsDemo.deleteMany({});\nThe following is the output:\n{ \"acknowledged\" : true, \"deletedCount\" : 3 }\nLook at the above sample output. Right now, we do not have any documents in the collection ‘deleteDocumentsDemo’ i.e. we have successfully deleted all the documents using the deleteMany() method."},"parsed":{"kind":"list like","value":[{"code":null,"e":1211,"s":1062,"text":"If you want to delete all documents from the collection, you can use deleteMany(). 
Let us first create a collection and insert some documents to it:"},{"code":null,"e":1487,"s":1211,"text":"> db.deleteDocumentsDemo.insert({\"Name\":\"Larry\",\"Age\":23});\nWriteResult({ \"nInserted\" : 1 })\n> db.deleteDocumentsDemo.insert({\"Name\":\"Mike\",\"Age\":21});\nWriteResult({ \"nInserted\" : 1 })\n> db.deleteDocumentsDemo.insert({\"Name\":\"Sam\",\"Age\":24});\nWriteResult({ \"nInserted\" : 1 })"},{"code":null,"e":1563,"s":1487,"text":"Now display all the documents from the collection. The query is as follows:"},{"code":null,"e":1605,"s":1563,"text":"> db.deleteDocumentsDemo.find().pretty();"},{"code":null,"e":1634,"s":1605,"text":"The following is the output:"},{"code":null,"e":1895,"s":1634,"text":"{\n \"_id\" : ObjectId(\"5c6ab0e064f3d70fcc914805\"),\n \"Name\" : \"Larry\",\n \"Age\" : 23\n}\n{\n \"_id\" : ObjectId(\"5c6ab0ef64f3d70fcc914806\"),\n \"Name\" : \"Mike\",\n \"Age\" : 21\n}\n{\n \"_id\" : ObjectId(\"5c6ab0f864f3d70fcc914807\"),\n \"Name\" : \"Sam\",\n \"Age\" : 24\n}"},{"code":null,"e":1920,"s":1895,"text":"The query is as follows:"},{"code":null,"e":1961,"s":1920,"text":"> db.deleteDocumentsDemo.deleteMany({});"},{"code":null,"e":1990,"s":1961,"text":"The following is the output:"},{"code":null,"e":2036,"s":1990,"text":"{ \"acknowledged\" : true, \"deletedCount\" : 3 }"},{"code":null,"e":2232,"s":2036,"text":"Look at the above sample output. Right now, we do not have any documents in the collection ‘deleteDocumentsDemo’ i.e. we have successfully deleted all the documents using the deleteMany() method."}],"string":"[\n {\n \"code\": null,\n \"e\": 1211,\n \"s\": 1062,\n \"text\": \"If you want to delete all documents from the collection, you can use deleteMany(). 
Let us first create a collection and insert some documents to it:\"\n },\n {\n \"code\": null,\n \"e\": 1487,\n \"s\": 1211,\n \"text\": \"> db.deleteDocumentsDemo.insert({\\\"Name\\\":\\\"Larry\\\",\\\"Age\\\":23});\\nWriteResult({ \\\"nInserted\\\" : 1 })\\n> db.deleteDocumentsDemo.insert({\\\"Name\\\":\\\"Mike\\\",\\\"Age\\\":21});\\nWriteResult({ \\\"nInserted\\\" : 1 })\\n> db.deleteDocumentsDemo.insert({\\\"Name\\\":\\\"Sam\\\",\\\"Age\\\":24});\\nWriteResult({ \\\"nInserted\\\" : 1 })\"\n },\n {\n \"code\": null,\n \"e\": 1563,\n \"s\": 1487,\n \"text\": \"Now display all the documents from the collection. The query is as follows:\"\n },\n {\n \"code\": null,\n \"e\": 1605,\n \"s\": 1563,\n \"text\": \"> db.deleteDocumentsDemo.find().pretty();\"\n },\n {\n \"code\": null,\n \"e\": 1634,\n \"s\": 1605,\n \"text\": \"The following is the output:\"\n },\n {\n \"code\": null,\n \"e\": 1895,\n \"s\": 1634,\n \"text\": \"{\\n \\\"_id\\\" : ObjectId(\\\"5c6ab0e064f3d70fcc914805\\\"),\\n \\\"Name\\\" : \\\"Larry\\\",\\n \\\"Age\\\" : 23\\n}\\n{\\n \\\"_id\\\" : ObjectId(\\\"5c6ab0ef64f3d70fcc914806\\\"),\\n \\\"Name\\\" : \\\"Mike\\\",\\n \\\"Age\\\" : 21\\n}\\n{\\n \\\"_id\\\" : ObjectId(\\\"5c6ab0f864f3d70fcc914807\\\"),\\n \\\"Name\\\" : \\\"Sam\\\",\\n \\\"Age\\\" : 24\\n}\"\n },\n {\n \"code\": null,\n \"e\": 1920,\n \"s\": 1895,\n \"text\": \"The query is as follows:\"\n },\n {\n \"code\": null,\n \"e\": 1961,\n \"s\": 1920,\n \"text\": \"> db.deleteDocumentsDemo.deleteMany({});\"\n },\n {\n \"code\": null,\n \"e\": 1990,\n \"s\": 1961,\n \"text\": \"The following is the output:\"\n },\n {\n \"code\": null,\n \"e\": 2036,\n \"s\": 1990,\n \"text\": \"{ \\\"acknowledged\\\" : true, \\\"deletedCount\\\" : 3 }\"\n },\n {\n \"code\": null,\n \"e\": 2232,\n \"s\": 2036,\n \"text\": \"Look at the above sample output. Right now, we do not have any documents in the collection ‘deleteDocumentsDemo’ i.e. 
we have successfully deleted all the documents using the deleteMany() method.\"\n }\n]"}}},{"rowIdx":572,"cells":{"title":{"kind":"string","value":"How to print characters from a string starting from 3rd to 5th in Python?"},"text":{"kind":"string","value":"Slicing feature in Python helps fetch a substring from original string. Slice operator [:] needs two operands. First operand is an integer representing index of starting character of slice. Second operand is index of character next to slice. Recalling that index of sequence starts from 0,\n>>> string = 'abcdefghij'\n>>> string[2:5]\n 'cde'\nHere 3rd character of slice ‘cde’ starts at index 2 and ends at 4, so second operand is given as 5"},"parsed":{"kind":"list like","value":[{"code":null,"e":1352,"s":1062,"text":"Slicing feature in Python helps fetch a substring from original string. Slice operator [:] needs two operands. First operand is an integer representing index of starting character of slice. Second operand is index of character next to slice. Recalling that index of sequence starts from 0,"},{"code":null,"e":1401,"s":1352,"text":">>> string = 'abcdefghij'\n>>> string[2:5]\n 'cde'"},{"code":null,"e":1500,"s":1401,"text":"Here 3rd character of slice ‘cde’ starts at index 2 and ends at 4, so second operand is given as 5"}],"string":"[\n {\n \"code\": null,\n \"e\": 1352,\n \"s\": 1062,\n \"text\": \"Slicing feature in Python helps fetch a substring from original string. Slice operator [:] needs two operands. First operand is an integer representing index of starting character of slice. Second operand is index of character next to slice. 
Recalling that index of sequence starts from 0,\"\n },\n {\n \"code\": null,\n \"e\": 1401,\n \"s\": 1352,\n \"text\": \">>> string = 'abcdefghij'\\n>>> string[2:5]\\n 'cde'\"\n },\n {\n \"code\": null,\n \"e\": 1500,\n \"s\": 1401,\n \"text\": \"Here 3rd character of slice ‘cde’ starts at index 2 and ends at 4, so second operand is given as 5\"\n }\n]"}}},{"rowIdx":573,"cells":{"title":{"kind":"string","value":"How to capture divide by zero exception in C#?"},"text":{"kind":"string","value":"System.DivideByZeroException is a class that handles errors generated from dividing a dividend with zero.\nLet us see an example −\nusing System;\n\nnamespace ErrorHandlingApplication {\n class DivNumbers {\n int result;\n\n DivNumbers() {\n result = 0;\n }\n public void division(int num1, int num2) {\n try {\n result = num1 / num2;\n } catch (DivideByZeroException e) {\n Console.WriteLine(\"Exception caught: {0}\", e);\n } finally {\n Console.WriteLine(\"Result: {0}\", result);\n }\n }\n static void Main(string[] args) {\n DivNumbers d = new DivNumbers();\n d.division(25, 0);\n Console.ReadKey();\n }\n }\n}\nThe values entered here is num1/ num2 −\nresult = num1 / num2;\nAbove, if num2 is set to 0, then the DivideByZeroException is caught since we have handled exception above."},"parsed":{"kind":"list like","value":[{"code":null,"e":1168,"s":1062,"text":"System.DivideByZeroException is a class that handles errors generated from dividing a dividend with zero."},{"code":null,"e":1192,"s":1168,"text":"Let us see an example −"},{"code":null,"e":1784,"s":1192,"text":"using System;\n\nnamespace ErrorHandlingApplication {\n class DivNumbers {\n int result;\n\n DivNumbers() {\n result = 0;\n }\n public void division(int num1, int num2) {\n try {\n result = num1 / num2;\n } catch (DivideByZeroException e) {\n Console.WriteLine(\"Exception caught: {0}\", e);\n } finally {\n Console.WriteLine(\"Result: {0}\", result);\n }\n }\n static void Main(string[] args) {\n DivNumbers d = new 
DivNumbers();\n d.division(25, 0);\n Console.ReadKey();\n }\n }\n}"},{"code":null,"e":1824,"s":1784,"text":"The values entered here is num1/ num2 −"},{"code":null,"e":1846,"s":1824,"text":"result = num1 / num2;"},{"code":null,"e":1954,"s":1846,"text":"Above, if num2 is set to 0, then the DivideByZeroException is caught since we have handled exception above."}],"string":"[\n {\n \"code\": null,\n \"e\": 1168,\n \"s\": 1062,\n \"text\": \"System.DivideByZeroException is a class that handles errors generated from dividing a dividend with zero.\"\n },\n {\n \"code\": null,\n \"e\": 1192,\n \"s\": 1168,\n \"text\": \"Let us see an example −\"\n },\n {\n \"code\": null,\n \"e\": 1784,\n \"s\": 1192,\n \"text\": \"using System;\\n\\nnamespace ErrorHandlingApplication {\\n class DivNumbers {\\n int result;\\n\\n DivNumbers() {\\n result = 0;\\n }\\n public void division(int num1, int num2) {\\n try {\\n result = num1 / num2;\\n } catch (DivideByZeroException e) {\\n Console.WriteLine(\\\"Exception caught: {0}\\\", e);\\n } finally {\\n Console.WriteLine(\\\"Result: {0}\\\", result);\\n }\\n }\\n static void Main(string[] args) {\\n DivNumbers d = new DivNumbers();\\n d.division(25, 0);\\n Console.ReadKey();\\n }\\n }\\n}\"\n },\n {\n \"code\": null,\n \"e\": 1824,\n \"s\": 1784,\n \"text\": \"The values entered here is num1/ num2 −\"\n },\n {\n \"code\": null,\n \"e\": 1846,\n \"s\": 1824,\n \"text\": \"result = num1 / num2;\"\n },\n {\n \"code\": null,\n \"e\": 1954,\n \"s\": 1846,\n \"text\": \"Above, if num2 is set to 0, then the DivideByZeroException is caught since we have handled exception above.\"\n }\n]"}}},{"rowIdx":574,"cells":{"title":{"kind":"string","value":"Android - Navigation"},"text":{"kind":"string","value":"In this chapter, we will see that how you can provide navigation forward and backward between an application. 
We will first look at how to provide up navigation in an application.\nThe up navigation will allow our application to move to previous activity from the next activity. It can be done like this.\nTo implement Up navigation, the first step is to declare which activity is the appropriate parent for each activity. You can do it by specifying parentActivityName attribute in an activity. Its syntax is given below −\nandroid:parentActivityName = \"com.example.test.MainActivity\" \n\nAfter that you need to call setDisplayHomeAsUpEnabled method of getActionBar() in the onCreate method of the activity. This will enable the back button in the top action bar.\ngetActionBar().setDisplayHomeAsUpEnabled(true);\nThe last thing you need to do is to override onOptionsItemSelected method. when the user presses it, your activity receives a call to onOptionsItemSelected(). The ID for the action is android.R.id.home.Its syntax is given below −\npublic boolean onOptionsItemSelected(MenuItem item) {\n \n switch (item.getItemId()) {\n case android.R.id.home:\n NavUtils.navigateUpFromSameTask(this);\n return true;\n }\t\n}\nSince you have enabled your back button to navigate within your application, you might want to put the application close function in the device back button.\nIt can be done by overriding onBackPressed and then calling moveTaskToBack and finish method. Its syntax is given below −\n@Override\npublic void onBackPressed() {\n moveTaskToBack(true); \n MainActivity2.this.finish();\n}\nApart from this setDisplayHomeAsUpEnabled method, there are other methods available in ActionBar API class. 
They are listed below −\naddTab(ActionBar.Tab tab, boolean setSelected)\nThis method adds a tab for use in tabbed navigation mode\ngetSelectedTab()\nThis method returns the currently selected tab if in tabbed navigation mode and there is at least one tab present\nhide()\nThis method hide the ActionBar if it is currently showing\nremoveAllTabs()\nThis method remove all tabs from the action bar and deselect the current tab\nselectTab(ActionBar.Tab tab)\nThis method select the specified tab\nThe below example demonstrates the use of Navigation. It crates a basic application that allows you to navigate within your application.\nTo experiment with this example, you need to run this on an actual device or in an emulator.\nHere is the content of src/MainActivity.java.\npackage com.example.sairamkrishna.myapplication;\n\nimport android.app.Activity;\nimport android.content.Intent;\nimport android.os.Bundle;\nimport android.view.View;\nimport android.widget.Button;\n\npublic class MainActivity extends Activity {\n Button b1;\n\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n\n b1 = (Button) findViewById(R.id.button);\n b1.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n Intent in=new Intent(MainActivity.this,second_main.class);\n startActivity(in);\n }\n });\n }\n}\nHere is the content of src/second_main.java.\npackage com.example.sairamkrishna.myapplication;\n\nimport android.app.Activity;\nimport android.os.Bundle;\nimport android.webkit.WebView;\nimport android.webkit.WebViewClient;\n\n/**\n * Created by Sairamkrishna on 4/6/2015.\n*/\n\npublic class second_main extends Activity {\n WebView wv;\n\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main_activity2);\n\n wv = (WebView) findViewById(R.id.webView);\n wv.setWebViewClient(new 
MyBrowser());\n wv.getSettings().setLoadsImagesAutomatically(true);\n wv.getSettings().setJavaScriptEnabled(true);\n wv.loadUrl(\"http://www.tutorialspoint.com\");\n }\n\n private class MyBrowser extends WebViewClient {\n @Override\n public boolean shouldOverrideUrlLoading(WebView view, String url) {\n view.loadUrl(url);\n return true;\n }\n }\n}\nHere is the content of activity_main.xml.\n\n\n \n \n \n \n \n \n \n \n\n\nHere is the content of activity_main_activity2.xml.\n\n\n \n \n\n\nHere is the content of Strings.xml.\n\n My Application\n\nHere is the content of AndroidManifest.xml.\n\n\n \n \n \n \n \n \n \n \n \n \n \n \n\t\t\n \n \n \n\nLet's try to run your application. I assume you had created your AVD while doing environment setup. To run the app from Android studio, open one of your project's activity files and click Run icon from the toolbar. Android studio installs the app on your AVD and starts it and if everything is fine with your setup and application, it will display following Emulator window−\nNow just press on button and the following screen will be shown to you.\nSecond activity contains webview, it has redirected to tutorialspoint.com as shown below\n\n 46 Lectures \n 7.5 hours \n\n Aditya Dua\n\n 32 Lectures \n 3.5 hours \n\n Sharad Kumar\n\n 9 Lectures \n 1 hours \n\n Abhilash Nelson\n\n 14 Lectures \n 1.5 hours \n\n Abhilash Nelson\n\n 15 Lectures \n 1.5 hours \n\n Abhilash Nelson\n\n 10 Lectures \n 1 hours \n\n Abhilash Nelson\n Print\n Add Notes\n Bookmark this page"},"parsed":{"kind":"list like","value":[{"code":null,"e":3787,"s":3607,"text":"In this chapter, we will see that how you can provide navigation forward and backward between an application. We will first look at how to provide up navigation in an application."},{"code":null,"e":3911,"s":3787,"text":"The up navigation will allow our application to move to previous activity from the next activity. 
It can be done like this."},{"code":null,"e":4129,"s":3911,"text":"To implement Up navigation, the first step is to declare which activity is the appropriate parent for each activity. You can do it by specifying parentActivityName attribute in an activity. Its syntax is given below −"},{"code":null,"e":4192,"s":4129,"text":"android:parentActivityName = \"com.example.test.MainActivity\" \n"},{"code":null,"e":4367,"s":4192,"text":"After that you need to call setDisplayHomeAsUpEnabled method of getActionBar() in the onCreate method of the activity. This will enable the back button in the top action bar."},{"code":null,"e":4415,"s":4367,"text":"getActionBar().setDisplayHomeAsUpEnabled(true);"},{"code":null,"e":4645,"s":4415,"text":"The last thing you need to do is to override onOptionsItemSelected method. when the user presses it, your activity receives a call to onOptionsItemSelected(). The ID for the action is android.R.id.home.Its syntax is given below −"},{"code":null,"e":4836,"s":4645,"text":"public boolean onOptionsItemSelected(MenuItem item) {\n \n switch (item.getItemId()) {\n case android.R.id.home:\n NavUtils.navigateUpFromSameTask(this);\n return true;\n }\t\n}"},{"code":null,"e":4993,"s":4836,"text":"Since you have enabled your back button to navigate within your application, you might want to put the application close function in the device back button."},{"code":null,"e":5115,"s":4993,"text":"It can be done by overriding onBackPressed and then calling moveTaskToBack and finish method. Its syntax is given below −"},{"code":null,"e":5215,"s":5115,"text":"@Override\npublic void onBackPressed() {\n moveTaskToBack(true); \n MainActivity2.this.finish();\n}"},{"code":null,"e":5347,"s":5215,"text":"Apart from this setDisplayHomeAsUpEnabled method, there are other methods available in ActionBar API class. 
They are listed below −"},{"code":null,"e":5394,"s":5347,"text":"addTab(ActionBar.Tab tab, boolean setSelected)"},{"code":null,"e":5451,"s":5394,"text":"This method adds a tab for use in tabbed navigation mode"},{"code":null,"e":5468,"s":5451,"text":"getSelectedTab()"},{"code":null,"e":5582,"s":5468,"text":"This method returns the currently selected tab if in tabbed navigation mode and there is at least one tab present"},{"code":null,"e":5589,"s":5582,"text":"hide()"},{"code":null,"e":5647,"s":5589,"text":"This method hide the ActionBar if it is currently showing"},{"code":null,"e":5663,"s":5647,"text":"removeAllTabs()"},{"code":null,"e":5740,"s":5663,"text":"This method remove all tabs from the action bar and deselect the current tab"},{"code":null,"e":5769,"s":5740,"text":"selectTab(ActionBar.Tab tab)"},{"code":null,"e":5806,"s":5769,"text":"This method select the specified tab"},{"code":null,"e":5943,"s":5806,"text":"The below example demonstrates the use of Navigation. It crates a basic application that allows you to navigate within your application."},{"code":null,"e":6036,"s":5943,"text":"To experiment with this example, you need to run this on an actual device or in an emulator."},{"code":null,"e":6082,"s":6036,"text":"Here is the content of src/MainActivity.java."},{"code":null,"e":6786,"s":6082,"text":"package com.example.sairamkrishna.myapplication;\n\nimport android.app.Activity;\nimport android.content.Intent;\nimport android.os.Bundle;\nimport android.view.View;\nimport android.widget.Button;\n\npublic class MainActivity extends Activity {\n Button b1;\n\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n\n b1 = (Button) findViewById(R.id.button);\n b1.setOnClickListener(new View.OnClickListener() {\n @Override\n public void onClick(View v) {\n Intent in=new Intent(MainActivity.this,second_main.class);\n startActivity(in);\n }\n });\n 
}\n}"},{"code":null,"e":6831,"s":6786,"text":"Here is the content of src/second_main.java."},{"code":null,"e":7749,"s":6831,"text":"package com.example.sairamkrishna.myapplication;\n\nimport android.app.Activity;\nimport android.os.Bundle;\nimport android.webkit.WebView;\nimport android.webkit.WebViewClient;\n\n/**\n * Created by Sairamkrishna on 4/6/2015.\n*/\n\npublic class second_main extends Activity {\n WebView wv;\n\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main_activity2);\n\n wv = (WebView) findViewById(R.id.webView);\n wv.setWebViewClient(new MyBrowser());\n wv.getSettings().setLoadsImagesAutomatically(true);\n wv.getSettings().setJavaScriptEnabled(true);\n wv.loadUrl(\"http://www.tutorialspoint.com\");\n }\n\n private class MyBrowser extends WebViewClient {\n @Override\n public boolean shouldOverrideUrlLoading(WebView view, String url) {\n view.loadUrl(url);\n return true;\n }\n }\n}"},{"code":null,"e":7791,"s":7749,"text":"Here is the content of activity_main.xml."},{"code":null,"e":9731,"s":7791,"text":"\n\n \n \n \n \n \n \n \n \n\n"},{"code":null,"e":9783,"s":9731,"text":"Here is the content of activity_main_activity2.xml."},{"code":null,"e":10268,"s":9783,"text":"\n\n \n \n\n"},{"code":null,"e":10304,"s":10268,"text":"Here is the content of Strings.xml."},{"code":null,"e":10380,"s":10304,"text":"\n My Application\n"},{"code":null,"e":10424,"s":10380,"text":"Here is the content of AndroidManifest.xml."},{"code":null,"e":11270,"s":10424,"text":"\n\n \n \n \n \n \n \n \n \n \n \n \n \n\t\t\n \n \n \n"},{"code":null,"e":11646,"s":11270,"text":"Let's try to run your application. I assume you had created your AVD while doing environment setup. To run the app from Android studio, open one of your project's activity files and click Run icon from the toolbar. 
Android studio installs the app on your AVD and starts it and if everything is fine with your setup and application, it will display following Emulator window−"},{"code":null,"e":11718,"s":11646,"text":"Now just press on button and the following screen will be shown to you."},{"code":null,"e":11807,"s":11718,"text":"Second activity contains webview, it has redirected to tutorialspoint.com as shown below"},{"code":null,"e":11842,"s":11807,"text":"\n 46 Lectures \n 7.5 hours \n"},{"code":null,"e":11854,"s":11842,"text":" Aditya Dua"},{"code":null,"e":11889,"s":11854,"text":"\n 32 Lectures \n 3.5 hours \n"},{"code":null,"e":11903,"s":11889,"text":" Sharad Kumar"},{"code":null,"e":11935,"s":11903,"text":"\n 9 Lectures \n 1 hours \n"},{"code":null,"e":11952,"s":11935,"text":" Abhilash Nelson"},{"code":null,"e":11987,"s":11952,"text":"\n 14 Lectures \n 1.5 hours \n"},{"code":null,"e":12004,"s":11987,"text":" Abhilash Nelson"},{"code":null,"e":12039,"s":12004,"text":"\n 15 Lectures \n 1.5 hours \n"},{"code":null,"e":12056,"s":12039,"text":" Abhilash Nelson"},{"code":null,"e":12089,"s":12056,"text":"\n 10 Lectures \n 1 hours \n"},{"code":null,"e":12106,"s":12089,"text":" Abhilash Nelson"},{"code":null,"e":12113,"s":12106,"text":" Print"},{"code":null,"e":12124,"s":12113,"text":" Add Notes"}],"string":"[\n {\n \"code\": null,\n \"e\": 3787,\n \"s\": 3607,\n \"text\": \"In this chapter, we will see that how you can provide navigation forward and backward between an application. We will first look at how to provide up navigation in an application.\"\n },\n {\n \"code\": null,\n \"e\": 3911,\n \"s\": 3787,\n \"text\": \"The up navigation will allow our application to move to previous activity from the next activity. It can be done like this.\"\n },\n {\n \"code\": null,\n \"e\": 4129,\n \"s\": 3911,\n \"text\": \"To implement Up navigation, the first step is to declare which activity is the appropriate parent for each activity. 
You can do it by specifying parentActivityName attribute in an activity. Its syntax is given below −\"\n },\n {\n \"code\": null,\n \"e\": 4192,\n \"s\": 4129,\n \"text\": \"android:parentActivityName = \\\"com.example.test.MainActivity\\\" \\n\"\n },\n {\n \"code\": null,\n \"e\": 4367,\n \"s\": 4192,\n \"text\": \"After that you need to call setDisplayHomeAsUpEnabled method of getActionBar() in the onCreate method of the activity. This will enable the back button in the top action bar.\"\n },\n {\n \"code\": null,\n \"e\": 4415,\n \"s\": 4367,\n \"text\": \"getActionBar().setDisplayHomeAsUpEnabled(true);\"\n },\n {\n \"code\": null,\n \"e\": 4645,\n \"s\": 4415,\n \"text\": \"The last thing you need to do is to override onOptionsItemSelected method. when the user presses it, your activity receives a call to onOptionsItemSelected(). The ID for the action is android.R.id.home.Its syntax is given below −\"\n },\n {\n \"code\": null,\n \"e\": 4836,\n \"s\": 4645,\n \"text\": \"public boolean onOptionsItemSelected(MenuItem item) {\\n \\n switch (item.getItemId()) {\\n case android.R.id.home:\\n NavUtils.navigateUpFromSameTask(this);\\n return true;\\n }\\t\\n}\"\n },\n {\n \"code\": null,\n \"e\": 4993,\n \"s\": 4836,\n \"text\": \"Since you have enabled your back button to navigate within your application, you might want to put the application close function in the device back button.\"\n },\n {\n \"code\": null,\n \"e\": 5115,\n \"s\": 4993,\n \"text\": \"It can be done by overriding onBackPressed and then calling moveTaskToBack and finish method. Its syntax is given below −\"\n },\n {\n \"code\": null,\n \"e\": 5215,\n \"s\": 5115,\n \"text\": \"@Override\\npublic void onBackPressed() {\\n moveTaskToBack(true); \\n MainActivity2.this.finish();\\n}\"\n },\n {\n \"code\": null,\n \"e\": 5347,\n \"s\": 5215,\n \"text\": \"Apart from this setDisplayHomeAsUpEnabled method, there are other methods available in ActionBar API class. 
They are listed below −\"\n },\n {\n \"code\": null,\n \"e\": 5394,\n \"s\": 5347,\n \"text\": \"addTab(ActionBar.Tab tab, boolean setSelected)\"\n },\n {\n \"code\": null,\n \"e\": 5451,\n \"s\": 5394,\n \"text\": \"This method adds a tab for use in tabbed navigation mode\"\n },\n {\n \"code\": null,\n \"e\": 5468,\n \"s\": 5451,\n \"text\": \"getSelectedTab()\"\n },\n {\n \"code\": null,\n \"e\": 5582,\n \"s\": 5468,\n \"text\": \"This method returns the currently selected tab if in tabbed navigation mode and there is at least one tab present\"\n },\n {\n \"code\": null,\n \"e\": 5589,\n \"s\": 5582,\n \"text\": \"hide()\"\n },\n {\n \"code\": null,\n \"e\": 5647,\n \"s\": 5589,\n \"text\": \"This method hide the ActionBar if it is currently showing\"\n },\n {\n \"code\": null,\n \"e\": 5663,\n \"s\": 5647,\n \"text\": \"removeAllTabs()\"\n },\n {\n \"code\": null,\n \"e\": 5740,\n \"s\": 5663,\n \"text\": \"This method remove all tabs from the action bar and deselect the current tab\"\n },\n {\n \"code\": null,\n \"e\": 5769,\n \"s\": 5740,\n \"text\": \"selectTab(ActionBar.Tab tab)\"\n },\n {\n \"code\": null,\n \"e\": 5806,\n \"s\": 5769,\n \"text\": \"This method select the specified tab\"\n },\n {\n \"code\": null,\n \"e\": 5943,\n \"s\": 5806,\n \"text\": \"The below example demonstrates the use of Navigation. 
It crates a basic application that allows you to navigate within your application.\"\n },\n {\n \"code\": null,\n \"e\": 6036,\n \"s\": 5943,\n \"text\": \"To experiment with this example, you need to run this on an actual device or in an emulator.\"\n },\n {\n \"code\": null,\n \"e\": 6082,\n \"s\": 6036,\n \"text\": \"Here is the content of src/MainActivity.java.\"\n },\n {\n \"code\": null,\n \"e\": 6786,\n \"s\": 6082,\n \"text\": \"package com.example.sairamkrishna.myapplication;\\n\\nimport android.app.Activity;\\nimport android.content.Intent;\\nimport android.os.Bundle;\\nimport android.view.View;\\nimport android.widget.Button;\\n\\npublic class MainActivity extends Activity {\\n Button b1;\\n\\n @Override\\n protected void onCreate(Bundle savedInstanceState) {\\n super.onCreate(savedInstanceState);\\n setContentView(R.layout.activity_main);\\n\\n b1 = (Button) findViewById(R.id.button);\\n b1.setOnClickListener(new View.OnClickListener() {\\n @Override\\n public void onClick(View v) {\\n Intent in=new Intent(MainActivity.this,second_main.class);\\n startActivity(in);\\n }\\n });\\n }\\n}\"\n },\n {\n \"code\": null,\n \"e\": 6831,\n \"s\": 6786,\n \"text\": \"Here is the content of src/second_main.java.\"\n },\n {\n \"code\": null,\n \"e\": 7749,\n \"s\": 6831,\n \"text\": \"package com.example.sairamkrishna.myapplication;\\n\\nimport android.app.Activity;\\nimport android.os.Bundle;\\nimport android.webkit.WebView;\\nimport android.webkit.WebViewClient;\\n\\n/**\\n * Created by Sairamkrishna on 4/6/2015.\\n*/\\n\\npublic class second_main extends Activity {\\n WebView wv;\\n\\n @Override\\n protected void onCreate(Bundle savedInstanceState) {\\n super.onCreate(savedInstanceState);\\n setContentView(R.layout.activity_main_activity2);\\n\\n wv = (WebView) findViewById(R.id.webView);\\n wv.setWebViewClient(new MyBrowser());\\n wv.getSettings().setLoadsImagesAutomatically(true);\\n wv.getSettings().setJavaScriptEnabled(true);\\n 
wv.loadUrl(\\\"http://www.tutorialspoint.com\\\");\\n }\\n\\n private class MyBrowser extends WebViewClient {\\n @Override\\n public boolean shouldOverrideUrlLoading(WebView view, String url) {\\n view.loadUrl(url);\\n return true;\\n }\\n }\\n}\"\n },\n {\n \"code\": null,\n \"e\": 7791,\n \"s\": 7749,\n \"text\": \"Here is the content of activity_main.xml.\"\n },\n {\n \"code\": null,\n \"e\": 9731,\n \"s\": 7791,\n \"text\": \"\\n\\n \\n \\n \\n \\n \\n \\n \\n \\n\\n\"\n },\n {\n \"code\": null,\n \"e\": 9783,\n \"s\": 9731,\n \"text\": \"Here is the content of activity_main_activity2.xml.\"\n },\n {\n \"code\": null,\n \"e\": 10268,\n \"s\": 9783,\n \"text\": \"\\n\\n \\n \\n\\n\"\n },\n {\n \"code\": null,\n \"e\": 10304,\n \"s\": 10268,\n \"text\": \"Here is the content of Strings.xml.\"\n },\n {\n \"code\": null,\n \"e\": 10380,\n \"s\": 10304,\n \"text\": \"\\n My Application\\n\"\n },\n {\n \"code\": null,\n \"e\": 10424,\n \"s\": 10380,\n \"text\": \"Here is the content of AndroidManifest.xml.\"\n },\n {\n \"code\": null,\n \"e\": 11270,\n \"s\": 10424,\n \"text\": \"\\n\\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n \\n\\t\\t\\n \\n \\n \\n\"\n },\n {\n \"code\": null,\n \"e\": 11646,\n \"s\": 11270,\n \"text\": \"Let's try to run your application. I assume you had created your AVD while doing environment setup. To run the app from Android studio, open one of your project's activity files and click Run icon from the toolbar. 
Android studio installs the app on your AVD and starts it and if everything is fine with your setup and application, it will display following Emulator window−\"\n },\n {\n \"code\": null,\n \"e\": 11718,\n \"s\": 11646,\n \"text\": \"Now just press on button and the following screen will be shown to you.\"\n },\n {\n \"code\": null,\n \"e\": 11807,\n \"s\": 11718,\n \"text\": \"Second activity contains webview, it has redirected to tutorialspoint.com as shown below\"\n },\n {\n \"code\": null,\n \"e\": 11842,\n \"s\": 11807,\n \"text\": \"\\n 46 Lectures \\n 7.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 11854,\n \"s\": 11842,\n \"text\": \" Aditya Dua\"\n },\n {\n \"code\": null,\n \"e\": 11889,\n \"s\": 11854,\n \"text\": \"\\n 32 Lectures \\n 3.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 11903,\n \"s\": 11889,\n \"text\": \" Sharad Kumar\"\n },\n {\n \"code\": null,\n \"e\": 11935,\n \"s\": 11903,\n \"text\": \"\\n 9 Lectures \\n 1 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 11952,\n \"s\": 11935,\n \"text\": \" Abhilash Nelson\"\n },\n {\n \"code\": null,\n \"e\": 11987,\n \"s\": 11952,\n \"text\": \"\\n 14 Lectures \\n 1.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 12004,\n \"s\": 11987,\n \"text\": \" Abhilash Nelson\"\n },\n {\n \"code\": null,\n \"e\": 12039,\n \"s\": 12004,\n \"text\": \"\\n 15 Lectures \\n 1.5 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 12056,\n \"s\": 12039,\n \"text\": \" Abhilash Nelson\"\n },\n {\n \"code\": null,\n \"e\": 12089,\n \"s\": 12056,\n \"text\": \"\\n 10 Lectures \\n 1 hours \\n\"\n },\n {\n \"code\": null,\n \"e\": 12106,\n \"s\": 12089,\n \"text\": \" Abhilash Nelson\"\n },\n {\n \"code\": null,\n \"e\": 12113,\n \"s\": 12106,\n \"text\": \" Print\"\n },\n {\n \"code\": null,\n \"e\": 12124,\n \"s\": 12113,\n \"text\": \" Add Notes\"\n }\n]"}}},{"rowIdx":575,"cells":{"title":{"kind":"string","value":"How can we change the JButton text dynamically in Java?\n"},"text":{"kind":"string","value":"A 
JButton is a subclass of AbstractButton and it can be used for adding platform-independent buttons in a Java Swing application. A JButon can generate an ActionListener interface when the user clicking on a button, it can also generate the MouseListener and KeyListener interfaces. By default, we can create a JButton with a text and also can change the text of a JButton by input some text in the text field and click on the button, it will call the actionPerformed() method of ActionListener interface and set an updated text in a button by calling setText(textField.getText()) method of a JButton class.\nimport java.awt.*;\nimport java.awt.event.*;\nimport javax.swing.*;\npublic class JButtonTextChangeTest extends JFrame {\n private JTextField textField;\n private JButton button;\n public JButtonTextChangeTest() {\n setTitle(\"JButtonTextChange Test\");\n setLayout(new FlowLayout());\n textField = new JTextField(20);\n button = new JButton(\"Initial Button\");\n button.addActionListener(new ActionListener() {\n public void actionPerformed(ActionEvent ae) {\n if (!textField.getText().equals(\"\"))\n button.setText(textField.getText());\n }\n });\n add(textField);\n add(button);\n setSize(400, 300);\n setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n setLocationRelativeTo(null);\n setVisible(true);\n }\n public static void main(String[] args) {\n new JButtonTextChangeTest();\n }\n}"},"parsed":{"kind":"list like","value":[{"code":null,"e":1670,"s":1062,"text":"A JButton is a subclass of AbstractButton and it can be used for adding platform-independent buttons in a Java Swing application. A JButon can generate an ActionListener interface when the user clicking on a button, it can also generate the MouseListener and KeyListener interfaces. 
We can create a JButton with an initial text and change that text later: entering some text in the text field and clicking the button calls the actionPerformed() method of the ActionListener interface, which sets the updated text on the button by calling the setText(textField.getText()) method of the JButton class."},{"code":null,"e":2546,"s":1670,"text":"import java.awt.*;\nimport java.awt.event.*;\nimport javax.swing.*;\npublic class JButtonTextChangeTest extends JFrame {\n private JTextField textField;\n private JButton button;\n public JButtonTextChangeTest() {\n setTitle(\"JButtonTextChange Test\");\n setLayout(new FlowLayout());\n textField = new JTextField(20);\n button = new JButton(\"Initial Button\");\n button.addActionListener(new ActionListener() {\n public void actionPerformed(ActionEvent ae) {\n if (!textField.getText().equals(\"\"))\n button.setText(textField.getText());\n }\n });\n add(textField);\n add(button);\n setSize(400, 300);\n setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n setLocationRelativeTo(null);\n setVisible(true);\n }\n public static void main(String[] args) {\n new JButtonTextChangeTest();\n }\n}"}],"string":"[\n {\n \"code\": null,\n \"e\": 1670,\n \"s\": 1062,\n \"text\": \"A JButton is a subclass of AbstractButton and it can be used for adding platform-independent buttons in a Java Swing application. A JButton can notify an ActionListener interface when the user clicks the button; it can also notify the MouseListener and KeyListener interfaces. 
We can create a JButton with an initial text and change that text later: entering some text in the text field and clicking the button calls the actionPerformed() method of the ActionListener interface, which sets the updated text on the button by calling the setText(textField.getText()) method of the JButton class.\"\n },\n {\n \"code\": null,\n \"e\": 2546,\n \"s\": 1670,\n \"text\": \"import java.awt.*;\\nimport java.awt.event.*;\\nimport javax.swing.*;\\npublic class JButtonTextChangeTest extends JFrame {\\n private JTextField textField;\\n private JButton button;\\n public JButtonTextChangeTest() {\\n setTitle(\\\"JButtonTextChange Test\\\");\\n setLayout(new FlowLayout());\\n textField = new JTextField(20);\\n button = new JButton(\\\"Initial Button\\\");\\n button.addActionListener(new ActionListener() {\\n public void actionPerformed(ActionEvent ae) {\\n if (!textField.getText().equals(\\\"\\\"))\\n button.setText(textField.getText());\\n }\\n });\\n add(textField);\\n add(button);\\n setSize(400, 300);\\n setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\\n setLocationRelativeTo(null);\\n setVisible(true);\\n }\\n public static void main(String[] args) {\\n new JButtonTextChangeTest();\\n }\\n}\"\n }\n]"}}},{"rowIdx":576,"cells":{"title":{"kind":"string","value":"How to create a dictionary with list comprehension in Python?"},"text":{"kind":"string","value":"The zip() function, which is a built-in function, provides a list of tuples containing elements at the same indices from two lists. 
If two lists hold keys and values respectively, this zip object can be used to construct a dictionary object using another built-in function, dict()\n>>> L1=['a','b','c','d']\n>>> L2=[1,2,3,4]\n>>> d1=dict(zip(L1,L2))\n>>> d1\n{'a': 1, 'b': 2, 'c': 3, 'd': 4}\nIn Python 3.x, a dictionary comprehension syntax is also available to construct a dictionary from a zip object\n>>> L2=[1,2,3,4]\n>>> L1=['a','b','c','d']\n>>> d={k:v for (k,v) in zip(L1,L2)}\n>>> d\n{'a': 1, 'b': 2, 'c': 3, 'd': 4}"},"parsed":{"kind":"list like","value":[{"code":null,"e":1339,"s":1062,"text":"The zip() function, which is a built-in function, provides a list of tuples containing elements at the same indices from two lists. 
If two lists hold keys and values respectively, this zip object can be used to construct a dictionary object using another built-in function, dict()"},{"code":null,"e":1445,"s":1339,"text":">>> L1=['a','b','c','d']\n>>> L2=[1,2,3,4]\n>>> d1=dict(zip(L1,L2))\n>>> d1\n{'a': 1, 'b': 2, 'c': 3, 'd': 4}"},{"code":null,"e":1551,"s":1445,"text":"In Python 3.x, a dictionary comprehension syntax is also available to construct a dictionary from a zip object"},{"code":null,"e":1668,"s":1551,"text":">>> L2=[1,2,3,4]\n>>> L1=['a','b','c','d']\n>>> d={k:v for (k,v) in zip(L1,L2)}\n>>> d\n{'a': 1, 'b': 2, 'c': 3, 'd': 4}"}],"string":"[\n {\n \"code\": null,\n \"e\": 1339,\n \"s\": 1062,\n \"text\": \"The zip() function, which is a built-in function, provides a list of tuples containing elements at the same indices from two lists. If two lists hold keys and values respectively, this zip object can be used to construct a dictionary object using another built-in function, dict()\"\n },\n {\n \"code\": null,\n \"e\": 1445,\n \"s\": 1339,\n \"text\": \">>> L1=['a','b','c','d']\\n>>> L2=[1,2,3,4]\\n>>> d1=dict(zip(L1,L2))\\n>>> d1\\n{'a': 1, 'b': 2, 'c': 3, 'd': 4}\"\n },\n {\n \"code\": null,\n \"e\": 1551,\n \"s\": 1445,\n \"text\": \"In Python 3.x, a dictionary comprehension syntax is also available to construct a dictionary from a zip object\"\n },\n {\n \"code\": null,\n \"e\": 1668,\n \"s\": 1551,\n \"text\": \">>> L2=[1,2,3,4]\\n>>> L1=['a','b','c','d']\\n>>> d={k:v for (k,v) in zip(L1,L2)}\\n>>> d\\n{'a': 1, 'b': 2, 'c': 3, 'd': 4}\"\n }\n]"}}},{"rowIdx":577,"cells":{"title":{"kind":"string","value":"Implementation of Naive Bayes Classifier with the use of Scikit-learn and ML.NET | by Robert Krzaczyński | Towards Data Science"},"text":{"kind":"string","value":"When we think about machine learning, the first languages that come to mind are Python or R. This is understandable because they provide us with many possibilities to implement these algorithms.\nHowever, I work in C# daily, and my attention has been drawn to a fairly new library: ML.NET. In this article, I would like to show how to implement the Naive Bayes Classifier in Python using Scikit-learn, and also in C# with the use of the aforementioned ML.NET.\nNaive Bayes Classifier\nThe Naive Bayes classifier is a simple probabilistic classifier that assumes mutual independence of the independent variables. It is based on Bayes’ theorem, which is expressed mathematically as follows:\nDataset\nI used the wine quality dataset from the UCI Machine Learning Repository for the experiment. The analyzed data set has 11 features and 11 classes. The classes determine the quality of the wine in the numerical range 0–10.\nML.NET\nThe first step is to create a console application project. 
Then you have to download the ML.NET library from NuGet Packages. Now you can create classes that correspond to the attributes in the dataset. Created classes are shown in the listing:\nThen you can go on to load the dataset and divide it into a training set and a testing set. I have adopted a standard structure here, i.e. 70% of the data is a training set, while the remaining 30% is a testing set.\nvar dataPath = \"../../../winequality-red.csv\";\nvar ml = new MLContext();\nvar DataView = ml.Data.LoadFromTextFile(dataPath, hasHeader: true, separatorChar: ';');\nNow it is necessary to adapt the model structure to the standards adopted by the ML.NET library. This means that the property specifying the class must be called Label. The remaining attributes must be combined under the name Features.\nvar partitions = ml.Data.TrainTestSplit( DataView, testFraction: 0.3);\nvar pipeline = ml.Transforms.Conversion.MapValueToKey(inputColumnName: \"Quality\", outputColumnName: \"Label\").Append(ml.Transforms.Concatenate(\"Features\", \"FixedAcidity\", \"VolatileAcidity\",\"CitricAcid\", \"ResidualSugar\", \"Chlorides\", \"FreeSulfurDioxide\", \"TotalSulfurDioxide\",\"Density\", \"Ph\", \"Sulphates\", \"Alcohol\")).AppendCacheCheckpoint(ml);\nOnce you have completed the previous steps, you can move on to creating a training pipeline. Here you choose a classifier in the form of the Naive Bayes Classifier, to which you specify in the parameters the column names of the label and features. You also indicate the property that holds the predicted label.\nvar trainingPipeline = pipeline.Append(ml.MulticlassClassification.Trainers.NaiveBayes(\"Label\",\"Features\")).Append(ml.Transforms.Conversion.MapKeyToValue(\"PredictedLabel\"));\nFinally, you can move on to training and testing the model. 
Everything is closed in two lines of code.\nvar trainedModel = trainingPipeline.Fit(partitions.TrainSet);\nvar testMetrics = ml.MulticlassClassification.Evaluate(trainedModel.Transform(partitions.TestSet));\nScikit-learn\nIn the case of the Python implementation, we also start with the handling of dataset files. We use the numpy and pandas libraries for this. In the listing, you can see functions that are used to retrieve data from the file and create an ndarray from it, which will then be used for the algorithm.\nimport numpy as np\nfrom sklearn.naive_bayes import GaussianNB\nfrom common.import_data import ImportData\nfrom sklearn.model_selection import train_test_split\nif __name__ == \"__main__\":\n data_set = ImportData()\n x = data_set.import_all_data()\n y = data_set.import_columns(np.array(['quality']))\nThe next step is to create a training and test set. In this case, we also use a 20% division for the test set and 80% for the training set. I used the train_test_split function, which comes from the sklearn library.\nX_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2)\nNow you can move on to the Naive Bayes Classifier. In this case, training and testing are also closed in a few lines of code.\nNB = GaussianNB()\nNB.fit(X_train, y_train.ravel())\npredictions = NB.predict(X_test)\nprint('Scores from each Iteration: ', NB.score(X_test, y_test))\nResults and summary\nThe accuracy of the Naive Bayes Classifier for the Scikit-learn implementation was 56.5%, while for ML.NET it was 41.5%. The difference may be due to differences in the algorithm implementations, but based on accuracy alone we cannot say which is better. However, we can say that a promising alternative for the implementation of machine learning algorithms is beginning to emerge, which is the use of C# and ML.NET."},"parsed":{"kind":"list like","value":[{"code":null,"e":366,"s":171,"text":"When we think about machine learning, the first languages that come to mind are Python or R. 
This is understandable because they provide us with many possibilities to implement these algorithms."},{"code":null,"e":655,"s":366,"text":"However, I work in C# daily, and my attention has been drawn to a fairly new library: ML.NET. In this article, I would like to show how to implement the Naive Bayes Classifier in Python using Scikit-learn, and also in C# with the use of the aforementioned ML.NET."},{"code":null,"e":678,"s":655,"text":"Naive Bayes Classifier"},{"code":null,"e":879,"s":678,"text":"The Naive Bayes classifier is a simple probabilistic classifier that assumes mutual independence of the independent variables. It is based on Bayes’ theorem, which is expressed mathematically as follows:"},{"code":null,"e":887,"s":879,"text":"Dataset"},{"code":null,"e":1109,"s":887,"text":"I used the wine quality dataset from the UCI Machine Learning Repository for the experiment. The analyzed data set has 11 features and 11 classes. The classes determine the quality of the wine in the numerical range 0–10."},{"code":null,"e":1116,"s":1109,"text":"ML.NET"},{"code":null,"e":1360,"s":1116,"text":"The first step is to create a console application project. Then you have to download the ML.NET library from NuGet Packages. Now you can create classes that correspond to the attributes in the dataset. Created classes are shown in the listing:"},{"code":null,"e":1567,"s":1360,"text":"Then you can go on to load the dataset and divide it into a training set and a testing set. I have adopted a standard structure here, i.e. 70% of the data is a training set, while the remaining 30% is a testing set."},{"code":null,"e":1736,"s":1567,"text":"var dataPath = \"../../../winequality-red.csv\";var ml = new MLContext();var DataView = ml.Data.LoadFromTextFile(dataPath, hasHeader: true, separatorChar: ';');"},{"code":null,"e":1973,"s":1736,"text":"Now it is necessary to adapt the model structure to the standards adopted by the ML.NET library. 
This means that the property specifying the class must be called Label. The remaining attributes must be combined under the name Features."},{"code":null,"e":2386,"s":1973,"text":"var partitions = ml.Data.TrainTestSplit( DataView, testFraction: 0.3);var pipeline = ml.Transforms.Conversion.MapValueToKey(inputColumnName: \"Quality\", outputColumnName: \"Label\").Append(ml.Transforms.Concatenate(\"Features\", \"FixedAcidity\", \"VolatileAcidity\",\"CitricAcid\", \"ResidualSugar\", \"Chlorides\", \"FreeSulfurDioxide\", \"TotalSulfurDioxide\",\"Density\", \"Ph\", \"Sulphates\", \"Alcohol\")).AppendCacheCheckpoint(ml);"},{"code":null,"e":2692,"s":2386,"text":"Once you have completed the previous steps, you can move on to creating a training pipeline. Here you choose a classifier in the form of the Naive Bayes Classifier, to which you specify in the parameters the column names of the label and features. You also indicate the property that holds the predicted label."},{"code":null,"e":2866,"s":2692,"text":"var trainingPipeline = pipeline.Append(ml.MulticlassClassification.Trainers.NaiveBayes(\"Label\",\"Features\")).Append(ml.Transforms.Conversion.MapKeyToValue(\"PredictedLabel\"));"},{"code":null,"e":2969,"s":2866,"text":"Finally, you can move on to training and testing the model. Everything is closed in two lines of code."},{"code":null,"e":3130,"s":2969,"text":"var trainedModel = trainingPipeline.Fit(partitions.TrainSet);var testMetrics = ml.MulticlassClassification.Evaluate(trainedModel.Transform(partitions.TestSet));"},{"code":null,"e":3143,"s":3130,"text":"Scikit-learn"},{"code":null,"e":3429,"s":3143,"text":"In the case of the Python implementation, we also start with the handling of dataset files. We use the numpy and pandas libraries for this. 
In the listing, you can see functions that are used to retrieve data from the file and create an ndarray from it, which will then be used for the algorithm."},{"code":null,"e":3694,"s":3429,"text":"import numpy as np\nfrom sklearn.naive_bayes import GaussianNB\nfrom common.import_data import ImportData\nfrom sklearn.model_selection import train_test_split\nif __name__ == \"__main__\":\n data_set = ImportData()\n x = data_set.import_all_data()\n y = data_set.import_columns(np.array(['quality']))"},{"code":null,"e":3910,"s":3694,"text":"The next step is to create a training and test set. In this case, we also use a 20% division for the test set and 80% for the training set. I used the train_test_split function, which comes from the sklearn library."},{"code":null,"e":3983,"s":3910,"text":"X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2)"},{"code":null,"e":4109,"s":3983,"text":"Now you can move on to the Naive Bayes Classifier. In this case, training and testing are also closed in a few lines of code."},{"code":null,"e":4254,"s":4109,"text":"NB = GaussianNB()\nNB.fit(X_train, y_train.ravel())\npredictions = NB.predict(X_test)\nprint('Scores from each Iteration: ', NB.score(X_test, y_test))"},{"code":null,"e":4274,"s":4254,"text":"Results and summary"}],"string":"[\n {\n \"code\": null,\n \"e\": 366,\n \"s\": 171,\n \"text\": \"When we think about machine learning, the first languages that come to mind are Python or R. This is understandable because they provide us with many possibilities to implement these algorithms.\"\n },\n {\n \"code\": null,\n \"e\": 655,\n \"s\": 366,\n \"text\": \"However, I work in C# daily, and my attention has been drawn to a fairly new library: ML.NET. 
In this article, I would like to show how to implement the Naive Bayes Classifier in Python using Scikit-learn, and also in C# with the use of the aforementioned ML.NET.\"\n },\n {\n \"code\": null,\n \"e\": 678,\n \"s\": 655,\n \"text\": \"Naive Bayes Classifier\"\n },\n {\n \"code\": null,\n \"e\": 879,\n \"s\": 678,\n \"text\": \"The Naive Bayes classifier is a simple probabilistic classifier that assumes mutual independence of the independent variables. It is based on Bayes’ theorem, which is expressed mathematically as follows:\"\n },\n {\n \"code\": null,\n \"e\": 887,\n \"s\": 879,\n \"text\": \"Dataset\"\n },\n {\n \"code\": null,\n \"e\": 1109,\n \"s\": 887,\n \"text\": \"I used the wine quality dataset from the UCI Machine Learning Repository for the experiment. The analyzed data set has 11 features and 11 classes. The classes determine the quality of the wine in the numerical range 0–10.\"\n },\n {\n \"code\": null,\n \"e\": 1116,\n \"s\": 1109,\n \"text\": \"ML.NET\"\n },\n {\n \"code\": null,\n \"e\": 1360,\n \"s\": 1116,\n \"text\": \"The first step is to create a console application project. Then you have to download the ML.NET library from NuGet Packages. Now you can create classes that correspond to the attributes in the dataset. Created classes are shown in the listing:\"\n },\n {\n \"code\": null,\n \"e\": 1567,\n \"s\": 1360,\n \"text\": \"Then you can go on to load the dataset and divide it into a training set and a testing set. I have adopted a standard structure here, i.e. 
70% of the data is a training set, while the remaining 30% is a testing set.\"\n },\n {\n \"code\": null,\n \"e\": 1736,\n \"s\": 1567,\n \"text\": \"var dataPath = \\\"../../../winequality-red.csv\\\";var ml = new MLContext();var DataView = ml.Data.LoadFromTextFile(dataPath, hasHeader: true, separatorChar: ';');\"\n },\n {\n \"code\": null,\n \"e\": 1973,\n \"s\": 1736,\n \"text\": \"Now it is necessary to adapt the model structure to the standards adopted by the ML.NET library. This means that the property specifying the class must be called Label. The remaining attributes must be combined under the name Features.\"\n },\n {\n \"code\": null,\n \"e\": 2386,\n \"s\": 1973,\n \"text\": \"var partitions = ml.Data.TrainTestSplit( DataView, testFraction: 0.3);var pipeline = ml.Transforms.Conversion.MapValueToKey(inputColumnName: \\\"Quality\\\", outputColumnName: \\\"Label\\\").Append(ml.Transforms.Concatenate(\\\"Features\\\", \\\"FixedAcidity\\\", \\\"VolatileAcidity\\\",\\\"CitricAcid\\\", \\\"ResidualSugar\\\", \\\"Chlorides\\\", \\\"FreeSulfurDioxide\\\", \\\"TotalSulfurDioxide\\\",\\\"Density\\\", \\\"Ph\\\", \\\"Sulphates\\\", \\\"Alcohol\\\")).AppendCacheCheckpoint(ml);\"\n },\n {\n \"code\": null,\n \"e\": 2692,\n \"s\": 2386,\n \"text\": \"Once you have completed the previous steps, you can move on to creating a training pipeline. Here you choose a classifier in the form of the Naive Bayes Classifier, to which you specify in the parameters the column names of the label and features. You also indicate the property that holds the predicted label.\"\n },\n {\n \"code\": null,\n \"e\": 2866,\n \"s\": 2692,\n \"text\": \"var trainingPipeline = pipeline.Append(ml.MulticlassClassification.Trainers.NaiveBayes(\\\"Label\\\",\\\"Features\\\")).Append(ml.Transforms.Conversion.MapKeyToValue(\\\"PredictedLabel\\\"));\"\n },\n {\n \"code\": null,\n \"e\": 2969,\n \"s\": 2866,\n \"text\": \"Finally, you can move on to training and testing the model. 
Everything is closed in two lines of code.\"\n },\n {\n \"code\": null,\n \"e\": 3130,\n \"s\": 2969,\n \"text\": \"var trainedModel = trainingPipeline.Fit(partitions.TrainSet);var testMetrics = ml.MulticlassClassification.Evaluate(trainedModel.Transform(partitions.TestSet));\"\n },\n {\n \"code\": null,\n \"e\": 3143,\n \"s\": 3130,\n \"text\": \"Scikit-learn\"\n },\n {\n \"code\": null,\n \"e\": 3429,\n \"s\": 3143,\n \"text\": \"In the case of the Python implementation, we also start with the handling of dataset files. We use the numpy and pandas libraries for this. In the listing, you can see functions that are used to retrieve data from the file and create an ndarray from it, which will then be used for the algorithm.\"\n },\n {\n \"code\": null,\n \"e\": 3694,\n \"s\": 3429,\n \"text\": \"import numpy as np\\nfrom sklearn.naive_bayes import GaussianNB\\nfrom common.import_data import ImportData\\nfrom sklearn.model_selection import train_test_split\\nif __name__ == \\\"__main__\\\":\\n data_set = ImportData()\\n x = data_set.import_all_data()\\n y = data_set.import_columns(np.array(['quality']))\"\n },\n {\n \"code\": null,\n \"e\": 3910,\n \"s\": 3694,\n \"text\": \"The next step is to create a training and test set. In this case, we also use a 20% division for the test set and 80% for the training set. I used the train_test_split function, which comes from the sklearn library.\"\n },\n {\n \"code\": null,\n \"e\": 3983,\n \"s\": 3910,\n \"text\": \"X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.2)\"\n },\n {\n \"code\": null,\n \"e\": 4109,\n \"s\": 3983,\n \"text\": \"Now you can move on to the Naive Bayes Classifier. 
In this case, training and testing are also closed in a few lines of code.\"\n },\n {\n \"code\": null,\n \"e\": 4254,\n \"s\": 4109,\n \"text\": \"NB = GaussianNB()\\nNB.fit(X_train, y_train.ravel())\\npredictions = NB.predict(X_test)\\nprint('Scores from each Iteration: ', NB.score(X_test, y_test))\"\n },\n {\n \"code\": null,\n \"e\": 4274,\n \"s\": 4254,\n \"text\": \"Results and summary\"\n }\n]"}}},{"rowIdx":578,"cells":{"title":{"kind":"string","value":"Primary Keys and Group By’s — A Brief SQL Investigation | by Jeremy Chow | Towards Data Science"},"text":{"kind":"string","value":"Today an acquaintance asked me an interesting SQL question, but not in the typical query sense; his question revolved more around an understanding of the underlying framework of SQL. Here is the context:\nThe exercise comes from this PostgreSQL exercise page and has the following schema:\nThe SQL question the website is asking isn’t important, but their posted solution is this:\nSELECT facs.name AS name, facs.initialoutlay/((sum( CASE WHEN memid = 0 THEN slots * facs.guestcost ELSE slots * membercost END)/3) - facs.monthlymaintenance) AS months FROM cd.bookings bks INNER JOIN cd.facilities facs ON bks.facid = facs.facid GROUP BY facs.facid ORDER BY name;\nMy friend tried switching the line GROUP BY facs.facid to GROUP BY facs.name, which broke the query with the message:\ncolumn \"facs.initialoutlay\" must appear in the GROUP BY clause or be used in an aggregate function\nThe question my friend asked was:\nWhy does the above query not work with the lines switched, even if both columns are unique to each row?\nIf you know the answer, congrats, you should answer my question posted at the end! If you’d like to skip to the answer, scroll to ‘The Answer’ section of this article. 
Otherwise, let’s go into the thought process of solving this question!\nFirst, let’s check for the obvious — Are those columns really unique, and are there multiple combinations of the columns used in our query (name, facid, initialoutlay, monthlymaintenance)? To check this we look at distinct combinations of those columns in the facilities table.\nSELECT DISTINCT facs.name as name, facs.facid, facs.initialoutlay, facs.monthlymaintenance\nFROM cd.bookings bks\nINNER JOIN cd.facilities facs ON bks.facid = facs.facid\nORDER BY facid;\nOutput:\n╔═════════════════╦═══════╦═══════════════╦════════════════════╗\n║ name            ║ facid ║ initialoutlay ║ monthlymaintenance ║\n╠═════════════════╬═══════╬═══════════════╬════════════════════╣\n║ Tennis Court 1  ║ 0     ║ 10000         ║ 200                ║\n║ Tennis Court 2  ║ 1     ║ 8000          ║ 200                ║\n║ Badminton Court ║ 2     ║ 4000          ║ 50                 ║\n║ Table Tennis    ║ 3     ║ 320           ║ 10                 ║\n║ Massage Room 1  ║ 4     ║ 4000          ║ 3000               ║\n║ Massage Room 2  ║ 5     ║ 4000          ║ 3000               ║\n║ Squash Court    ║ 6     ║ 5000          ║ 80                 ║\n║ Snooker Table   ║ 7     ║ 450           ║ 15                 ║\n║ Pool Table      ║ 8     ║ 400           ║ 15                 ║\n╚═════════════════╩═══════╩═══════════════╩════════════════════╝\nName and facid are both unique to each row and 1 to 1, and each pair only has one initialoutlay and monthlymaintenance value. Intuitively, grouping by either of those two columns should be functionally equivalent to grouping by the other. So why doesn’t grouping by name work for this query?\nAs you may intuit if you’re familiar with SQL, this is a primary key issue. For those that don’t know, a primary key is the value that uniquely identifies each row of a table, and it can never be ‘NULL’ for a row. But how do we find the designated primary key of a table?\nA quick google search gives us the following code from the PostgreSQL wiki. 
Running this on the website’s query section gives the following:\nSELECT a.attname, format_type(a.atttypid, a.atttypmod) AS data_type\nFROM pg_index i\nJOIN pg_attribute a ON a.attrelid = i.indrelid AND a.attnum = ANY(i.indkey)\nWHERE i.indrelid = 'cd.facilities'::regclass\nAND i.indisprimary;\nOutput:\n╔═════════╦═══════════╗\n║ attname ║ data_type ║\n╠═════════╬═══════════╣\n║ facid   ║ integer   ║\n╚═════════╩═══════════╝\nSo facid is a primary key of the facilities table! Now we’ve confirmed the probable cause, but what is the reason grouping by the primary key allows you to throw in columns with no aggregate function like we do with facs.initialoutlay and facs.monthlymaintenance?\nSELECT facs.name AS name,\n facs.initialoutlay/((sum( /* <=========== */\n CASE WHEN memid = 0 THEN slots * facs.guestcost\n ELSE slots * membercost END)/3)\n - facs.monthlymaintenance) AS months /* <=========== */\nFROM cd.bookings bks\nINNER JOIN cd.facilities facs ON bks.facid = facs.facid\nGROUP BY facs.facid\nORDER BY name;\n/* Shouldn't these two columns be inside of an aggregation? */\nTo answer this question we look at the PostgreSQL help docs, specifically for GROUP BY:\nWhen GROUP BY is present, or any aggregate functions are present, it is not valid for the SELECT list expressions to refer to ungrouped columns except within aggregate functions or when the ungrouped column is functionally dependent on the grouped columns, since there would otherwise be more than one possible value to return for an ungrouped column. A functional dependency exists if the grouped columns (or a subset thereof) are the primary key of the table containing the ungrouped column.\nAs a Stack Overflow user Tony L. 
puts it:\nGrouping by primary key results in a single record in each group which is logically the same as not grouping at all / grouping by all columns, therefore we can select all other columns.\nEssentially this means grouping by the primary key of a table results in no change in rows to that table; therefore, if we group by the primary key of a table, we can call on all columns of that table with no aggregate function.\nLet’s reiterate: Given that we’re looking at one table, grouping by its primary key is the same as grouping by everything, which is the same as not grouping at all — each of these approaches will result in one group per row. Once you understand that, you understand the crux of this problem.\nBecause of this, queries like this work:\n1. Group by everything:\nSELECT *\nFROM cd.facilities f\nGROUP BY facid, name, membercost, guestcost, initialoutlay, monthlymaintenance\nLIMIT 5\nOUTPUT:\nwhich is functionally identical to\n2. Don't group by anything\nSELECT * FROM cd.facilities f\nLIMIT 5\nand 3. Group by primary key but don't aggregate\nSELECT * FROM cd.facilities f\nGROUP BY facid\nLIMIT 5\nThese all output the same values! Now we have our solution.\nThe reason the first query works is simply that facid is the primary key and name is not. Despite them both being unique to each row, the table facilities was created with facid as the primary key, thus it gets special treatment when used in a group by as covered above.\nSome alternative solutions to what they posted would be:\n1. 
Group by name then aggregate everything else\nSELECT facs.name as name,\n AVG(facs.initialoutlay)/((sum(case when memid = 0 then slots * facs.guestcost else slots * membercost end)/3)\n - AVG(facs.monthlymaintenance)) as months\nFROM cd.bookings bks\nINNER JOIN cd.facilities facs ON bks.facid = facs.facid\nGROUP BY facs.name\nORDER BY name;\nWhy this works:\nBecause facs.name is unique to each row in the facilities table just as facid was, we group by facs.name then add AVG calls around previously unaggregated facilities columns.\n2. Group by all facility columns used in the select statement\nSELECT facs.name as name,\n facs.initialoutlay/((sum(case when memid = 0 then slots * facs.guestcost else slots * membercost end)/3)\n - facs.monthlymaintenance) as months\nfrom cd.bookings bks\nINNER JOIN cd.facilities facs ON bks.facid = facs.facid\nGROUP BY facs.name, facs.initialoutlay, facs.guestcost, facs.monthlymaintenance\nORDER BY name;\nWhy this works:\nThis includes all the values used in the SELECT statement in the GROUP BY, which is the normal GROUP BY logic and syntax.\nThat concludes the main question, but if you are interested in exploring more quirks, I ran into the following issue while solving the posted problem. If any of you know why, feel free to ping me or leave a comment!\n/* This works (manually lists all columns in the group by) */\nSELECT *\nFROM cd.facilities f\nGROUP BY facid, name, membercost, guestcost, initialoutlay, monthlymaintenance\nLIMIT 5\n/* This does not (selecting all columns using f.*) */\nSELECT *\nFROM cd.facilities f\nGROUP BY f.*\nLIMIT 5\nThanks for reading, I hope this article helped you out! Feel free to check out my other tutorials and stay posted for new ones in the future!"},"parsed":{"kind":"list like","value":[{"code":null,"e":376,"s":172,"text":"Today an acquaintance asked me an interesting SQL question, but not in the typical query sense; his question revolved more around an understanding of the underlying framework of SQL. 
Here is the context:"},{"code":null,"e":460,"s":376,"text":"The exercise comes from this PostgreSQL exercise page and has the following schema:"},{"code":null,"e":551,"s":460,"text":"The SQL question the website is asking isn’t important, but their posted solution is this:"},{"code":null,"e":842,"s":551,"text":"SELECT facs.name AS name, facs.initialoutlay/((sum( CASE WHEN memid = 0 THEN slots * facs.guestcost ELSE slots * membercost END)/3) - facs.monthlymaintenance) AS months FROM cd.bookings bks INNER JOIN cd.facilities facs ON bks.facid = facs.facid GROUP BY facs.facid ORDER BY name;"},{"code":null,"e":960,"s":842,"text":"My friend tried switching the line GROUP BY facs.facid to GROUP BY facs.name, which broke the query with the message:"},{"code":null,"e":1059,"s":960,"text":"column \"facs.initialoutlay\" must appear in the GROUP BY clause or be used in an aggregate function"},{"code":null,"e":1094,"s":1059,"text":"The question my friend asked was:"},{"code":null,"e":1203,"s":1094,"text":"Why does the above query not work with the lines switched, even if both columns are unique to each row?"},{"code":null,"e":1442,"s":1203,"text":"If you know the answer, congrats, you should answer my question posted at the end! If you’d like to skip to the answer, scroll to ‘The Answer’ section of this article. Otherwise, let’s go into the thought process of solving this question!"},{"code":null,"e":1720,"s":1442,"text":"First, let’s check for the obvious — Are those columns really unique, and are there multiple combinations of the columns used in our query (name, facid, initialoutlay, monthlymaintenance)? 
To check this we look at distinct combinations of those columns in the facilities table."},{"code":null,"e":2755,"s":1720,"text":"SELECT DISTINCT facs.name as name, facs.facid, facs.initialoutlay, facs.monthlymaintenance FROM cd.bookings bks INNER JOIN cd.facilities facs ON bks.facid = facs.facidORDER BY facid;Output:╔═════════════════╦═══════╦═══════════════╦════════════════════╗║ name ║ facid ║ initialoutlay ║ monthlymaintenance ║╠═════════════════╬═══════╬═══════════════╬════════════════════╣║ Tennis Court 1 ║ 0 ║ 10000 ║ 200 ║║ Tennis Court 2 ║ 1 ║ 8000 ║ 200 ║║ Badminton Court ║ 2 ║ 4000 ║ 50 ║║ Table Tennis ║ 3 ║ 320 ║ 10 ║║ Massage Room 1 ║ 4 ║ 4000 ║ 3000 ║║ Massage Room 2 ║ 5 ║ 4000 ║ 3000 ║║ Squash Court ║ 6 ║ 5000 ║ 80 ║║ Snooker Table ║ 7 ║ 450 ║ 15 ║║ Pool Table ║ 8 ║ 400 ║ 15 ║╚═════════════════╩═══════╩═══════════════╩════════════════════╝"},{"code":null,"e":3047,"s":2755,"text":"Name and facid are both unique to each row and 1 to 1, and each pair only has one initialoutlay and monthlymaintenance value. Intuitively, grouping by either of those two columns should be functionally equivalent to grouping by the other. So why doesn’t grouping by name work for this query?"},{"code":null,"e":3319,"s":3047,"text":"As you may intuit if you’re familiar with SQL, this is a primary key issue. For those that don’t know, a primary key is the value that uniquely identifies each row of a table, and it can never be ‘NULL’ for a row. But how do we find the designated primary key of a table?"},{"code":null,"e":3460,"s":3319,"text":"A quick google search gives us the following code from the PostgreSQL wiki. 
Running this on the website’s query section gives the following:"},{"code":null,"e":3831,"s":3460,"text":"SELECT a.attname, format_type(a.atttypid, a.atttypmod) AS data_typeFROM pg_index iJOIN pg_attribute a ON a.attrelid = i.indrelid AND a.attnum = ANY(i.indkey)WHERE i.indrelid = 'cd.facilities'::regclassAND i.indisprimary;Output:╔═════════╦═══════════╗║ attname ║ data_type ║╠═════════╬═══════════╣║ facid ║ integer ║╚═════════╩═══════════╝"},{"code":null,"e":4095,"s":3831,"text":"So facid is a primary key of the facilities table! Now we’ve confirmed the probable cause, but what is the reason grouping by the primary key allows you to throw in columns with no aggregate function like we do with facs.initialoutlay and facs.monthlymaintenance?"},{"code":null,"e":4491,"s":4095,"text":"SELECT facs.name AS name, facs.initialoutlay/((sum( /* <=========== */ CASE WHEN memid = 0 THEN slots * facs.guestcost ELSE slots * membercost END)/3) - facs.monthlymaintenance) AS months /* <=========== */FROM cd.bookings bks INNER JOIN cd.facilities facs ON bks.facid = facs.facid GROUP BY facs.facid ORDER BY name;/* Shouldn't these two columns be inside of an aggregation? */"},{"code":null,"e":4580,"s":4491,"text":"To answer this question we look at the PostgreSQL help docs, specifically for GROUP BY :"},{"code":null,"e":5074,"s":4580,"text":"When GROUP BY is present, or any aggregate functions are present, it is not valid for the SELECT list expressions to refer to ungrouped columns except within aggregate functions or when the ungrouped column is functionally dependent on the grouped columns, since there would otherwise be more than one possible value to return for an ungrouped column. A functional dependency exists if the grouped columns (or a subset thereof) are the primary key of the table containing the ungrouped column."},{"code":null,"e":5116,"s":5074,"text":"As a Stack Overflow user Tony L. 
puts it:"},{"code":null,"e":5302,"s":5116,"text":"Grouping by primary key results in a single record in each group which is logically the same as not grouping at all / grouping by all columns, therefore we can select all other columns."},{"code":null,"e":5530,"s":5302,"text":"Essentially this means grouping by the primary key of a table results in no change in rows to that table, therefore if we group by the primary key of a table, we can call on all columns of that table with no aggregate function."},{"code":null,"e":5822,"s":5530,"text":"Let’s reiterate: Given that we’re looking at one table, grouping by its primary key is the same as grouping by everything, which is the same as not grouping at all — each of these approaches will result in one group per row. Once you understand that, you understand the crux of this problem."},{"code":null,"e":5863,"s":5822,"text":"Because of this, queries like this work:"},{"code":null,"e":6007,"s":5863,"text":"1. Group by everything:SELECT *FROM cd.facilities fGROUP BY facid, name, membercost, guestcost, initialoutlay, monthlymaintenanceLIMIT 5OUTPUT:"},{"code":null,"e":6042,"s":6007,"text":"which is functionally identical to"},{"code":null,"e":6202,"s":6042,"text":"2. Don't group by anythingSELECT * FROM cd.facilities fLIMIT 5and 3. Group by primary key but don't aggregateSELECT * FROM cd.facilities fGROUP BY facidLIMIT 5"},{"code":null,"e":6262,"s":6202,"text":"These all output the same values! Now we have our solution."},{"code":null,"e":6533,"s":6262,"text":"The reason the first query works is simply that facid is the primary key and name is not. Despite them both being unique to each row, the table facilities was created with facid as the primary key, thus it gets special treatment when used in a group by as covered above."},{"code":null,"e":6590,"s":6533,"text":"Some alternative solutions to what they posted would be:"},{"code":null,"e":7669,"s":6590,"text":"1. 
Group by name then aggregate everything elseSELECT facs.name as name, AVG(facs.initialoutlay)/((sum(case when memid = 0 then slots * facs.guestcost else slots * membercost end)/3) - AVG(facs.monthlymaintenance) as months FROM cd.bookings bks INNER JOIN cd.facilities facs ON bks.facid = facs.facid GROUP BY facs.nameORDER BY name;Why this works:Because facs.name is unique to each row in the facilities table just as facid was, we group by facs.name then add AVG calls around previously unaggregated facilities columns.2. Group by all facility columns used in the select statementSELECT facs.name as name, facs.initialoutlay/((sum(case when memid = 0 then slots * facs.guestcost else slots * membercost end)/3) - facs.monthlymaintenance) as months from cd.bookings bks INNER JOIN cd.facilities facs ON bks.facid = facs.facid GROUP BY facs.name, facs.initialoutlay, facs.guestcost, facs.monthlymaintenance ORDER BY name;Why this works:This includes all the values used in the SELECT statement in the GROUP BY, which is the normal GROUP BY logic and syntax."},{"code":null,"e":7884,"s":7669,"text":"That concludes the main question, but if you are interested in exploring more quirks, I ran into the following issue while solving the posted problem. If any of you know why feel free to ping me or leave a comment!"},{"code":null,"e":8158,"s":7884,"text":"/* This works (manually lists all columns in the group by)*/SELECT *FROM cd.facilities fGROUP BY facid, name, membercost, guestcost, initialoutlay, monthlymaintenanceLIMIT 5/* This does not (selecting all columns using f.*) */SELECT *FROM cd.facilities fGROUP BY f.*LIMIT 5"}],"string":"[\n {\n \"code\": null,\n \"e\": 376,\n \"s\": 172,\n \"text\": \"Today an acquaintance asked me an interesting SQL question, but not in the typical query sense; his question revolved more around an understanding of the underlying framework of SQL. 
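The crux above, that grouping by a unique key produces exactly one group per row, can be sketched outside PostgreSQL as well. The snippet below is a minimal illustration using Python's sqlite3 module and a made-up three-row facilities table. One caveat: SQLite does not enforce PostgreSQL's functional-dependency rule (it silently allows bare columns in grouped queries), so it can demonstrate the one-group-per-row equivalence but not reproduce the original error message.

```python
import sqlite3

# Tiny stand-in for the cd.facilities table (values invented for the sketch).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE facilities (
        facid INTEGER PRIMARY KEY,
        name TEXT,
        initialoutlay INTEGER,
        monthlymaintenance INTEGER
    )
""")
conn.executemany(
    "INSERT INTO facilities VALUES (?, ?, ?, ?)",
    [
        (0, "Tennis Court 1", 10000, 200),
        (1, "Tennis Court 2", 8000, 200),
        (2, "Badminton Court", 4000, 50),
    ],
)

# Not grouping, grouping by the primary key, and grouping by every column
# all yield one row per facility.
ungrouped = conn.execute("SELECT * FROM facilities").fetchall()
by_pk = conn.execute("SELECT * FROM facilities GROUP BY facid").fetchall()
by_all = conn.execute(
    "SELECT * FROM facilities "
    "GROUP BY facid, name, initialoutlay, monthlymaintenance"
).fetchall()

print(sorted(ungrouped) == sorted(by_pk) == sorted(by_all))  # True
```

The comparison is sorted because GROUP BY makes no ordering promise; the point is only that the three result sets contain the same rows.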
Excel Formulas

A formula in Excel is used to do mathematical calculations. Formulas always start with the equal sign (=) typed in the cell, followed by your calculation.

Formulas can be used for calculations such as:

=1+1
=2*2
=4/2

It can also be used to calculate values using cells as input.

Let's have a look at an example.

Type or copy the following values:

Now we want to do a calculation with those values.

Step by step:

Select C1 and type (=)
Right click A1
Type (+)
Right click A2
Press enter

You got it! You have successfully calculated A1(2) + A2(4) = C1(6).

Note: Using cells to make calculations is an important part of Excel and you will use this a lot as you learn.

Let's change from addition to multiplication, by replacing the (+) with a (*). It should now be =A1*A2, press enter to see what happens.

You got C1(8), right? Well done!

Excel is great in this way. It allows you to add values to cells and do calculations on them.

Now, try to change the multiplication (*) to subtraction (-) and division (/).
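The arithmetic Excel performs for these formulas can be mirrored in a short script. This is only a sketch, not Excel: a plain dictionary stands in for the worksheet, holding the example values 2 and 4 from the tutorial.

```python
# A sketch of the worksheet above: a dictionary stands in for cells.
cells = {"A1": 2, "A2": 4}

cells["C1"] = cells["A1"] + cells["A2"]   # like typing =A1+A2 in C1
print(cells["C1"])                        # 6

cells["C1"] = cells["A1"] * cells["A2"]   # like typing =A1*A2 in C1
print(cells["C1"])                        # 8

# Subtraction (=A1-A2) and division (=A1/A2) work the same way.
print(cells["A1"] - cells["A2"])          # -2
print(cells["A1"] / cells["A2"])          # 0.5
```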
\nDelete all values in the sheet after you have tried the different combinations.\nLet's add new data for the next example, where we will help the Pokemon trainers to count their Pokeballs.\nType or copy the following values:\nThe data explained:\nColumn A: Pokemon Trainers\nRow 1: Types of Pokeballs\nRange B2:D4: Amount of Pokeballs, Great balls and Ultra balls\nNote: It is important to practice reading data to understand its context. In this example you should focus on the trainers and their Pokeballs, which have three different types: Pokeball, Great ball and Ultra ball.\n\nLet's help Iva to count her Pokeballs. You find Iva in A2(Iva). The values in row 2 B2(2), C2(3), D2(1) belong to her.\nCount the Pokeballs, step by step:\n\nSelect cell E2 and type (=)\nRight click B2\nType (+)\nRight click C2\nType (+)\nRight click D2\nHit enter\n\nSelect cell E2 and type (=)\nRight click B2\nType (+)\nRight click C2\nType (+)\nRight click D2\nHit enter\nDid you get the value E2(6)? Good job! You have helped Iva to count her Pokeballs.\nNow, let's help Liam and Adora with counting theirs.\nDo you remember the fill function that we learned about earlier? It can be used to continue calculations sidewards, downwards and upwards. Let's try it!\nLets use the fill function to continue the formula, step by step:\n\nSelect E2\nFill E2:E4\n\nSelect E2\nFill E2:E4\nThat is cool, right? The fill function continued the calculation that you used for Iva and was able to understand that you wanted to count the cells in the next rows as well. \nNow we have counted the Pokeballs for all three; Iva(6), Liam(12) and Adora(15). \nLet's see how many Pokeballs Iva, Liam and Adora have in total.\nThe total is called SUM in Excel.\nThere are two ways to calculate the SUM. \nAdding cells\nSUM function\nExcel has many pre-made functions available for you to use. The SUM\nfunction is one of the most used ones. 
You will learn more about functions in a later chapter.

Let's try both approaches.

Note: You can navigate to the cells with your keyboard arrows instead of right clicking them. Try it!

Sum by adding cells, step by step:

Select cell E5, and type =
Left click E2
Type (+)
Left click E3
Type (+)
Left click E4
Hit enter

The result is E5(33).

Let's try the SUM function.

Remember to delete the values that you currently have in E5.

SUM function, step by step:

Type E5(=)
Write SUM
Double click SUM in the menu
Mark the range E2:E4
Hit enter

Great job! You have successfully calculated the SUM using the SUM function.

Iva, Liam and Adora have 33 Pokeballs in total.

Let's change a value to see what happens. Type B2(7):

The value in cell B2 was changed from 2 to 7. Notice that the formulas recalculate when we change the value in the cells, and the SUM is updated from 33 to 38. We can change the values used by the formulas, and the calculations remain.

Values used in formulas can be typed directly or by using cells. The formula updates the result if you change the value of a cell used in the formula. The fill function can be used to continue your formulas upwards, downwards and sideways. Excel has pre-built functions, such as SUM.
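The chapter's moving parts, the filled row formulas, the SUM over E2:E4, and the recalculation when B2 changes, can be sketched together in a few lines. Note one assumption: this excerpt only lists Iva's individual counts (2, 3, 1), so the rows for Liam and Adora below are placeholders chosen to match their stated totals of 12 and 15.

```python
# Sketch of the trainers sheet. Iva's row is from the tutorial;
# Liam's and Adora's rows are placeholders matching their stated totals.
sheet = {
    "Iva":   [2, 3, 1],   # B2, C2, D2
    "Liam":  [5, 4, 3],   # placeholder row summing to 12
    "Adora": [7, 6, 2],   # placeholder row summing to 15
}

def recalculate(sheet):
    # Column E: one total per trainer (the filled =B+C+D formulas),
    # then E5: the SUM over E2:E4.
    row_totals = {name: sum(row) for name, row in sheet.items()}
    return row_totals, sum(row_totals.values())

row_totals, grand_total = recalculate(sheet)
print(row_totals)   # {'Iva': 6, 'Liam': 12, 'Adora': 15}
print(grand_total)  # 33

# Changing B2 from 2 to 7, as in the tutorial, updates the results:
sheet["Iva"][0] = 7
row_totals, grand_total = recalculate(sheet)
print(grand_total)  # 38
```

Just as in the worksheet, nothing but the one input value changes; the formulas stay the same and the totals follow.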
\nIn the next chapter you will learn about relative and absolute references.\nComplete the Excel formula:\n8+10\n\nStart the Exercise\nWe just launchedW3Schools videos\nGet certifiedby completinga course today!\nIf you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:\nhelp@w3schools.com\nYour message has been sent to W3Schools."},"parsed":{"kind":"list like","value":[{"code":null,"e":155,"s":0,"text":"A formula in Excel is used to do mathematical calculations. Formulas always start with the equal sign (=) typed in the cell, followed by your calculation."},{"code":null,"e":202,"s":155,"text":"Formulas can be used for calculations such as:"},{"code":null,"e":207,"s":202,"text":"=1+1"},{"code":null,"e":212,"s":207,"text":"=2*2"},{"code":null,"e":219,"s":212,"text":"=4/2=2"},{"code":null,"e":282,"s":219,"text":"It can also be used to calculate values using cells as input. "},{"code":null,"e":315,"s":282,"text":"Let's have a look at an example."},{"code":null,"e":350,"s":315,"text":"Type or copy the following values:"},{"code":null,"e":401,"s":350,"text":"Now we want to do a calculation with those values."},{"code":null,"e":415,"s":401,"text":"Step by step:"},{"code":null,"e":491,"s":415,"text":"\nSelect C1 and type (=)\nRight click A1\nType (+)\nRight click A2\nPress enter\n"},{"code":null,"e":514,"s":491,"text":"Select C1 and type (=)"},{"code":null,"e":529,"s":514,"text":"Right click A1"},{"code":null,"e":538,"s":529,"text":"Type (+)"},{"code":null,"e":553,"s":538,"text":"Right click A2"},{"code":null,"e":565,"s":553,"text":"Press enter"},{"code":null,"e":633,"s":565,"text":"You got it! You have successfully calculated A1(2) + A2(4) = C1(6)."},{"code":null,"e":743,"s":633,"text":"Note: Using cells to make calculations is an important part of Excel and you will use this alot as you learn."},{"code":null,"e":880,"s":743,"text":"Lets change from addition to multiplication, by replacing the (+) with a (*). 
It should now be =A1*A2, press enter to see what happens. "},{"code":null,"e":913,"s":880,"text":"You got C1(8), right? Well done!"},{"code":null,"e":1016,"s":913,"text":"Excel is great in this way. It allows you to add values to cells and make you do calculations on them."},{"code":null,"e":1096,"s":1016,"text":"Now, try to change the multiplication (*) to subtraction (-) and dividing (/). "},{"code":null,"e":1176,"s":1096,"text":"Delete all values in the sheet after you have tried the different combinations."},{"code":null,"e":1283,"s":1176,"text":"Let's add new data for the next example, where we will help the Pokemon trainers to count their Pokeballs."},{"code":null,"e":1318,"s":1283,"text":"Type or copy the following values:"},{"code":null,"e":1338,"s":1318,"text":"The data explained:"},{"code":null,"e":1365,"s":1338,"text":"Column A: Pokemon Trainers"},{"code":null,"e":1391,"s":1365,"text":"Row 1: Types of Pokeballs"},{"code":null,"e":1453,"s":1391,"text":"Range B2:D4: Amount of Pokeballs, Great balls and Ultra balls"},{"code":null,"e":1669,"s":1453,"text":"Note: It is important to practice reading data to understand its context. In this example you should focus on the trainers and their Pokeballs, which have three different types: Pokeball, Great ball and Ultra ball.\n"},{"code":null,"e":1788,"s":1669,"text":"Let's help Iva to count her Pokeballs. You find Iva in A2(Iva). 
The values in row 2 B2(2), C2(3), D2(1) belong to her."},{"code":null,"e":1823,"s":1788,"text":"Count the Pokeballs, step by step:"},{"code":null,"e":1926,"s":1823,"text":"\nSelect cell E2 and type (=)\nRight click B2\nType (+)\nRight click C2\nType (+)\nRight click D2\nHit enter\n"},{"code":null,"e":1954,"s":1926,"text":"Select cell E2 and type (=)"},{"code":null,"e":1969,"s":1954,"text":"Right click B2"},{"code":null,"e":1978,"s":1969,"text":"Type (+)"},{"code":null,"e":1993,"s":1978,"text":"Right click C2"},{"code":null,"e":2002,"s":1993,"text":"Type (+)"},{"code":null,"e":2017,"s":2002,"text":"Right click D2"},{"code":null,"e":2027,"s":2017,"text":"Hit enter"},{"code":null,"e":2110,"s":2027,"text":"Did you get the value E2(6)? Good job! You have helped Iva to count her Pokeballs."},{"code":null,"e":2163,"s":2110,"text":"Now, let's help Liam and Adora with counting theirs."},{"code":null,"e":2316,"s":2163,"text":"Do you remember the fill function that we learned about earlier? It can be used to continue calculations sidewards, downwards and upwards. Let's try it!"},{"code":null,"e":2382,"s":2316,"text":"Lets use the fill function to continue the formula, step by step:"},{"code":null,"e":2405,"s":2382,"text":"\nSelect E2\nFill E2:E4\n"},{"code":null,"e":2415,"s":2405,"text":"Select E2"},{"code":null,"e":2426,"s":2415,"text":"Fill E2:E4"},{"code":null,"e":2602,"s":2426,"text":"That is cool, right? The fill function continued the calculation that you used for Iva and was able to understand that you wanted to count the cells in the next rows as well. "},{"code":null,"e":2684,"s":2602,"text":"Now we have counted the Pokeballs for all three; Iva(6), Liam(12) and Adora(15). "},{"code":null,"e":2748,"s":2684,"text":"Let's see how many Pokeballs Iva, Liam and Adora have in total."},{"code":null,"e":2782,"s":2748,"text":"The total is called SUM in Excel."},{"code":null,"e":2824,"s":2782,"text":"There are two ways to calculate the SUM. 
"},{"code":null,"e":2837,"s":2824,"text":"Adding cells"},{"code":null,"e":2850,"s":2837,"text":"SUM function"},{"code":null,"e":3013,"s":2850,"text":"Excel has many pre-made functions available for you to use. The SUM\nfunction is one of the most used ones. You will learn more about functions in a later chapter."},{"code":null,"e":3040,"s":3013,"text":"Let's try both approaches."},{"code":null,"e":3142,"s":3040,"text":"Note: You can navigate to the cells with your keyboard arrows instead of right clicking them. Try it!"},{"code":null,"e":3177,"s":3142,"text":"Sum by adding cells, step by step:"},{"code":null,"e":3276,"s":3177,"text":"\nSelect cell E5, and type =\nLeft click E2\nType (+)\nLeft click E3\nType (+)\nLeft click E4\nHit enter\n"},{"code":null,"e":3303,"s":3276,"text":"Select cell E5, and type ="},{"code":null,"e":3317,"s":3303,"text":"Left click E2"},{"code":null,"e":3326,"s":3317,"text":"Type (+)"},{"code":null,"e":3340,"s":3326,"text":"Left click E3"},{"code":null,"e":3349,"s":3340,"text":"Type (+)"},{"code":null,"e":3363,"s":3349,"text":"Left click E4"},{"code":null,"e":3373,"s":3363,"text":"Hit enter"},{"code":null,"e":3395,"s":3373,"text":"The result is E5(33)."},{"code":null,"e":3424,"s":3395,"text":"Let's try the SUM function. "},{"code":null,"e":3485,"s":3424,"text":"Remember to delete the values that you currently have in E5."},{"code":null,"e":3513,"s":3485,"text":"SUM function, step by step:"},{"code":null,"e":3596,"s":3513,"text":"\nType E5(=)\nWrite SUM\nDouble click SUM in the menu\nMark the range E2:E4\nHit enter\n"},{"code":null,"e":3607,"s":3596,"text":"Type E5(=)"},{"code":null,"e":3617,"s":3607,"text":"Write SUM"},{"code":null,"e":3646,"s":3617,"text":"Double click SUM in the menu"},{"code":null,"e":3667,"s":3646,"text":"Mark the range E2:E4"},{"code":null,"e":3677,"s":3667,"text":"Hit enter"},{"code":null,"e":3753,"s":3677,"text":"Great job! 
You have successfully calculated the SUM using the SUM function."},{"code":null,"e":3801,"s":3753,"text":"Iva, Liam and Adora have 33 Pokeballs in total."},{"code":null,"e":3855,"s":3801,"text":"Let's change a value to see what happens. Type B2(7):"},{"code":null,"e":4117,"s":3855,"text":"The value in cell B2 was changed from 2 to 7. Notice that the formulas are doing calculations when we change the value in the cells, and the SUM is updated from 33 to 38. It allows us to change values that are used by the formulas, and the calculations remain.\n"},{"code":null,"e":4413,"s":4117,"text":"Values used in formulas can be typed directly and by using cells. The formula updates the result if you change the value of cells, which is used in the formula. The fill function can be used to continue your formulas upwards, downwards and sidewards. Excel has pre-built functions, such as SUM. "},{"code":null,"e":4488,"s":4413,"text":"In the next chapter you will learn about relative and absolute references."},{"code":null,"e":4516,"s":4488,"text":"Complete the Excel formula:"},{"code":null,"e":4522,"s":4516,"text":"8+10\n"},{"code":null,"e":4541,"s":4522,"text":"Start the Exercise"},{"code":null,"e":4574,"s":4541,"text":"We just launchedW3Schools videos"},{"code":null,"e":4616,"s":4574,"text":"Get certifiedby completinga course today!"},{"code":null,"e":4723,"s":4616,"text":"If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:"},{"code":null,"e":4742,"s":4723,"text":"help@w3schools.com"}],"string":"[\n {\n \"code\": null,\n \"e\": 155,\n \"s\": 0,\n \"text\": \"A formula in Excel is used to do mathematical calculations. 
Formulas always start with the equal sign (=) typed in the cell, followed by your calculation.\"\n },\n {\n \"code\": null,\n \"e\": 202,\n \"s\": 155,\n \"text\": \"Formulas can be used for calculations such as:\"\n },\n {\n \"code\": null,\n \"e\": 207,\n \"s\": 202,\n \"text\": \"=1+1\"\n },\n {\n \"code\": null,\n \"e\": 212,\n \"s\": 207,\n \"text\": \"=2*2\"\n },\n {\n \"code\": null,\n \"e\": 219,\n \"s\": 212,\n \"text\": \"=4/2=2\"\n },\n {\n \"code\": null,\n \"e\": 282,\n \"s\": 219,\n \"text\": \"It can also be used to calculate values using cells as input. \"\n },\n {\n \"code\": null,\n \"e\": 315,\n \"s\": 282,\n \"text\": \"Let's have a look at an example.\"\n },\n {\n \"code\": null,\n \"e\": 350,\n \"s\": 315,\n \"text\": \"Type or copy the following values:\"\n },\n {\n \"code\": null,\n \"e\": 401,\n \"s\": 350,\n \"text\": \"Now we want to do a calculation with those values.\"\n },\n {\n \"code\": null,\n \"e\": 415,\n \"s\": 401,\n \"text\": \"Step by step:\"\n },\n {\n \"code\": null,\n \"e\": 491,\n \"s\": 415,\n \"text\": \"\\nSelect C1 and type (=)\\nRight click A1\\nType (+)\\nRight click A2\\nPress enter\\n\"\n },\n {\n \"code\": null,\n \"e\": 514,\n \"s\": 491,\n \"text\": \"Select C1 and type (=)\"\n },\n {\n \"code\": null,\n \"e\": 529,\n \"s\": 514,\n \"text\": \"Right click A1\"\n },\n {\n \"code\": null,\n \"e\": 538,\n \"s\": 529,\n \"text\": \"Type (+)\"\n },\n {\n \"code\": null,\n \"e\": 553,\n \"s\": 538,\n \"text\": \"Right click A2\"\n },\n {\n \"code\": null,\n \"e\": 565,\n \"s\": 553,\n \"text\": \"Press enter\"\n },\n {\n \"code\": null,\n \"e\": 633,\n \"s\": 565,\n \"text\": \"You got it! 
You have successfully calculated A1(2) + A2(4) = C1(6).\"\n },\n {\n \"code\": null,\n \"e\": 743,\n \"s\": 633,\n \"text\": \"Note: Using cells to make calculations is an important part of Excel and you will use this alot as you learn.\"\n },\n {\n \"code\": null,\n \"e\": 880,\n \"s\": 743,\n \"text\": \"Lets change from addition to multiplication, by replacing the (+) with a (*). It should now be =A1*A2, press enter to see what happens. \"\n },\n {\n \"code\": null,\n \"e\": 913,\n \"s\": 880,\n \"text\": \"You got C1(8), right? Well done!\"\n },\n {\n \"code\": null,\n \"e\": 1016,\n \"s\": 913,\n \"text\": \"Excel is great in this way. It allows you to add values to cells and make you do calculations on them.\"\n },\n {\n \"code\": null,\n \"e\": 1096,\n \"s\": 1016,\n \"text\": \"Now, try to change the multiplication (*) to subtraction (-) and dividing (/). \"\n },\n {\n \"code\": null,\n \"e\": 1176,\n \"s\": 1096,\n \"text\": \"Delete all values in the sheet after you have tried the different combinations.\"\n },\n {\n \"code\": null,\n \"e\": 1283,\n \"s\": 1176,\n \"text\": \"Let's add new data for the next example, where we will help the Pokemon trainers to count their Pokeballs.\"\n },\n {\n \"code\": null,\n \"e\": 1318,\n \"s\": 1283,\n \"text\": \"Type or copy the following values:\"\n },\n {\n \"code\": null,\n \"e\": 1338,\n \"s\": 1318,\n \"text\": \"The data explained:\"\n },\n {\n \"code\": null,\n \"e\": 1365,\n \"s\": 1338,\n \"text\": \"Column A: Pokemon Trainers\"\n },\n {\n \"code\": null,\n \"e\": 1391,\n \"s\": 1365,\n \"text\": \"Row 1: Types of Pokeballs\"\n },\n {\n \"code\": null,\n \"e\": 1453,\n \"s\": 1391,\n \"text\": \"Range B2:D4: Amount of Pokeballs, Great balls and Ultra balls\"\n },\n {\n \"code\": null,\n \"e\": 1669,\n \"s\": 1453,\n \"text\": \"Note: It is important to practice reading data to understand its context. 
In this example you should focus on the trainers and their Pokeballs, which have three different types: Pokeball, Great ball and Ultra ball.\\n\"\n },\n {\n \"code\": null,\n \"e\": 1788,\n \"s\": 1669,\n \"text\": \"Let's help Iva to count her Pokeballs. You find Iva in A2(Iva). The values in row 2 B2(2), C2(3), D2(1) belong to her.\"\n },\n {\n \"code\": null,\n \"e\": 1823,\n \"s\": 1788,\n \"text\": \"Count the Pokeballs, step by step:\"\n },\n {\n \"code\": null,\n \"e\": 1926,\n \"s\": 1823,\n \"text\": \"\\nSelect cell E2 and type (=)\\nLeft click B2\\nType (+)\\nLeft click C2\\nType (+)\\nLeft click D2\\nHit enter\\n\"\n },\n {\n \"code\": null,\n \"e\": 1954,\n \"s\": 1926,\n \"text\": \"Select cell E2 and type (=)\"\n },\n {\n \"code\": null,\n \"e\": 1969,\n \"s\": 1954,\n \"text\": \"Left click B2\"\n },\n {\n \"code\": null,\n \"e\": 1978,\n \"s\": 1969,\n \"text\": \"Type (+)\"\n },\n {\n \"code\": null,\n \"e\": 1993,\n \"s\": 1978,\n \"text\": \"Left click C2\"\n },\n {\n \"code\": null,\n \"e\": 2002,\n \"s\": 1993,\n \"text\": \"Type (+)\"\n },\n {\n \"code\": null,\n \"e\": 2017,\n \"s\": 2002,\n \"text\": \"Left click D2\"\n },\n {\n \"code\": null,\n \"e\": 2027,\n \"s\": 2017,\n \"text\": \"Hit enter\"\n },\n {\n \"code\": null,\n \"e\": 2110,\n \"s\": 2027,\n \"text\": \"Did you get the value E2(6)? Good job! You have helped Iva to count her Pokeballs.\"\n },\n {\n \"code\": null,\n \"e\": 2163,\n \"s\": 2110,\n \"text\": \"Now, let's help Liam and Adora with counting theirs.\"\n },\n {\n \"code\": null,\n \"e\": 2316,\n \"s\": 2163,\n \"text\": \"Do you remember the fill function that we learned about earlier? It can be used to continue calculations sidewards, downwards and upwards. 
Let's try it!\"\n },\n {\n \"code\": null,\n \"e\": 2382,\n \"s\": 2316,\n \"text\": \"Let's use the fill function to continue the formula, step by step:\"\n },\n {\n \"code\": null,\n \"e\": 2405,\n \"s\": 2382,\n \"text\": \"\\nSelect E2\\nFill E2:E4\\n\"\n },\n {\n \"code\": null,\n \"e\": 2415,\n \"s\": 2405,\n \"text\": \"Select E2\"\n },\n {\n \"code\": null,\n \"e\": 2426,\n \"s\": 2415,\n \"text\": \"Fill E2:E4\"\n },\n {\n \"code\": null,\n \"e\": 2602,\n \"s\": 2426,\n \"text\": \"That is cool, right? The fill function continued the calculation that you used for Iva and was able to understand that you wanted to count the cells in the next rows as well. \"\n },\n {\n \"code\": null,\n \"e\": 2684,\n \"s\": 2602,\n \"text\": \"Now we have counted the Pokeballs for all three: Iva(6), Liam(12) and Adora(15). \"\n },\n {\n \"code\": null,\n \"e\": 2748,\n \"s\": 2684,\n \"text\": \"Let's see how many Pokeballs Iva, Liam and Adora have in total.\"\n },\n {\n \"code\": null,\n \"e\": 2782,\n \"s\": 2748,\n \"text\": \"The total is called SUM in Excel.\"\n },\n {\n \"code\": null,\n \"e\": 2824,\n \"s\": 2782,\n \"text\": \"There are two ways to calculate the SUM. \"\n },\n {\n \"code\": null,\n \"e\": 2837,\n \"s\": 2824,\n \"text\": \"Adding cells\"\n },\n {\n \"code\": null,\n \"e\": 2850,\n \"s\": 2837,\n \"text\": \"SUM function\"\n },\n {\n \"code\": null,\n \"e\": 3013,\n \"s\": 2850,\n \"text\": \"Excel has many pre-made functions available for you to use. The SUM\\nfunction is one of the most used ones. You will learn more about functions in a later chapter.\"\n },\n {\n \"code\": null,\n \"e\": 3040,\n \"s\": 3013,\n \"text\": \"Let's try both approaches.\"\n },\n {\n \"code\": null,\n \"e\": 3142,\n \"s\": 3040,\n \"text\": \"Note: You can navigate to the cells with your keyboard arrows instead of clicking them. 
Try it!\"\n },\n {\n \"code\": null,\n \"e\": 3177,\n \"s\": 3142,\n \"text\": \"Sum by adding cells, step by step:\"\n },\n {\n \"code\": null,\n \"e\": 3276,\n \"s\": 3177,\n \"text\": \"\\nSelect cell E5, and type =\\nLeft click E2\\nType (+)\\nLeft click E3\\nType (+)\\nLeft click E4\\nHit enter\\n\"\n },\n {\n \"code\": null,\n \"e\": 3303,\n \"s\": 3276,\n \"text\": \"Select cell E5, and type =\"\n },\n {\n \"code\": null,\n \"e\": 3317,\n \"s\": 3303,\n \"text\": \"Left click E2\"\n },\n {\n \"code\": null,\n \"e\": 3326,\n \"s\": 3317,\n \"text\": \"Type (+)\"\n },\n {\n \"code\": null,\n \"e\": 3340,\n \"s\": 3326,\n \"text\": \"Left click E3\"\n },\n {\n \"code\": null,\n \"e\": 3349,\n \"s\": 3340,\n \"text\": \"Type (+)\"\n },\n {\n \"code\": null,\n \"e\": 3363,\n \"s\": 3349,\n \"text\": \"Left click E4\"\n },\n {\n \"code\": null,\n \"e\": 3373,\n \"s\": 3363,\n \"text\": \"Hit enter\"\n },\n {\n \"code\": null,\n \"e\": 3395,\n \"s\": 3373,\n \"text\": \"The result is E5(33).\"\n },\n {\n \"code\": null,\n \"e\": 3424,\n \"s\": 3395,\n \"text\": \"Let's try the SUM function. 
\"\n },\n {\n \"code\": null,\n \"e\": 3485,\n \"s\": 3424,\n \"text\": \"Remember to delete the values that you currently have in E5.\"\n },\n {\n \"code\": null,\n \"e\": 3513,\n \"s\": 3485,\n \"text\": \"SUM function, step by step:\"\n },\n {\n \"code\": null,\n \"e\": 3596,\n \"s\": 3513,\n \"text\": \"\\nType E5(=)\\nWrite SUM\\nDouble click SUM in the menu\\nMark the range E2:E4\\nHit enter\\n\"\n },\n {\n \"code\": null,\n \"e\": 3607,\n \"s\": 3596,\n \"text\": \"Type E5(=)\"\n },\n {\n \"code\": null,\n \"e\": 3617,\n \"s\": 3607,\n \"text\": \"Write SUM\"\n },\n {\n \"code\": null,\n \"e\": 3646,\n \"s\": 3617,\n \"text\": \"Double click SUM in the menu\"\n },\n {\n \"code\": null,\n \"e\": 3667,\n \"s\": 3646,\n \"text\": \"Mark the range E2:E4\"\n },\n {\n \"code\": null,\n \"e\": 3677,\n \"s\": 3667,\n \"text\": \"Hit enter\"\n },\n {\n \"code\": null,\n \"e\": 3753,\n \"s\": 3677,\n \"text\": \"Great job! You have successfully calculated the SUM using the SUM function.\"\n },\n {\n \"code\": null,\n \"e\": 3801,\n \"s\": 3753,\n \"text\": \"Iva, Liam and Adora have 33 Pokeballs in total.\"\n },\n {\n \"code\": null,\n \"e\": 3855,\n \"s\": 3801,\n \"text\": \"Let's change a value to see what happens. Type B2(7):\"\n },\n {\n \"code\": null,\n \"e\": 4117,\n \"s\": 3855,\n \"text\": \"The value in cell B2 was changed from 2 to 7. Notice that the formulas are doing calculations when we change the value in the cells, and the SUM is updated from 33 to 38. It allows us to change values that are used by the formulas, and the calculations remain.\\n\"\n },\n {\n \"code\": null,\n \"e\": 4413,\n \"s\": 4117,\n \"text\": \"Values used in formulas can be typed directly and by using cells. The formula updates the result if you change the value of cells, which is used in the formula. The fill function can be used to continue your formulas upwards, downwards and sidewards. Excel has pre-built functions, such as SUM. 
\"\n },\n {\n \"code\": null,\n \"e\": 4488,\n \"s\": 4413,\n \"text\": \"In the next chapter you will learn about relative and absolute references.\"\n },\n {\n \"code\": null,\n \"e\": 4516,\n \"s\": 4488,\n \"text\": \"Complete the Excel formula:\"\n },\n {\n \"code\": null,\n \"e\": 4522,\n \"s\": 4516,\n \"text\": \"8+10\\n\"\n },\n {\n \"code\": null,\n \"e\": 4541,\n \"s\": 4522,\n \"text\": \"Start the Exercise\"\n },\n {\n \"code\": null,\n \"e\": 4574,\n \"s\": 4541,\n \"text\": \"We just launchedW3Schools videos\"\n },\n {\n \"code\": null,\n \"e\": 4616,\n \"s\": 4574,\n \"text\": \"Get certifiedby completinga course today!\"\n },\n {\n \"code\": null,\n \"e\": 4723,\n \"s\": 4616,\n \"text\": \"If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:\"\n },\n {\n \"code\": null,\n \"e\": 4742,\n \"s\": 4723,\n \"text\": \"help@w3schools.com\"\n }\n]"}}},{"rowIdx":580,"cells":{"title":{"kind":"string","value":"Java String startsWith() method example."},"text":{"kind":"string","value":"The startsWith(String prefix, int toffset) method of the String class tests if the substring of this string beginning at the specified index starts with the specified prefix.\nimport java.lang.*;\n\npublic class StringDemo {\n public static void main(String[] args) {\n String str = \"www.tutorialspoint.com\";\n System.out.println(str);\n\n //The start string to be checked\n String startstr1 = \"tutorialspoint\";\n String startstr2 = \"tutorialspoint\";\n\n //Checks that string starts with given substring and starting index\n boolean retval1 = str.startsWith(startstr1);\n boolean retval2 = str.startsWith(startstr2, 4);\n\n //Prints true if the string starts with given substring\n System.out.println(\"starts with \" + startstr1 + \" ? \" + retval1);\n System.out.println(\"string \" + startstr2 + \" starting from index 4 ? \" + retval2);\n }\n}\nwww.tutorialspoint.com\nstarts with tutorialspoint ? 
false\nstring tutorialspoint starting from index 4 ? true"},"parsed":{"kind":"list like","value":[{"code":null,"e":1237,"s":1062,"text":"The startsWith(String prefix, int toffset) method of the String class tests if the substring of this string beginning at the specified index starts with the specified prefix."},{"code":null,"e":1941,"s":1237,"text":"import java.lang.*;\n\npublic class StringDemo {\n public static void main(String[] args) {\n String str = \"www.tutorialspoint.com\";\n System.out.println(str);\n\n //The start string to be checked\n String startstr1 = \"tutorialspoint\";\n String startstr2 = \"tutorialspoint\";\n\n //Checks that string starts with given substring and starting index\n boolean retval1 = str.startsWith(startstr1);\n boolean retval2 = str.startsWith(startstr2, 4);\n\n //Prints true if the string starts with given substring\n System.out.println(\"starts with \" + startstr1 + \" ? \" + retval1);\n System.out.println(\"string \" + startstr2 + \" starting from index 4 ? \" + retval2);\n }\n}"},{"code":null,"e":2051,"s":1941,"text":"www.tutorialspoint.com\nstarts with tutorialspoint ? false\nstring tutorialspoint starting from index 4 ? 
true\n"}],"string":"[\n {\n \"code\": null,\n \"e\": 1237,\n \"s\": 1062,\n \"text\": \"The startsWith(String prefix, int toffset) method of the String class tests if the substring of this string beginning at the specified index starts with the specified prefix.\"\n },\n {\n \"code\": null,\n \"e\": 1941,\n \"s\": 1237,\n \"text\": \"import java.lang.*;\\n\\npublic class StringDemo {\\n public static void main(String[] args) {\\n String str = \\\"www.tutorialspoint.com\\\";\\n System.out.println(str);\\n\\n //The start string to be checked\\n String startstr1 = \\\"tutorialspoint\\\";\\n String startstr2 = \\\"tutorialspoint\\\";\\n\\n //Checks that string starts with given substring and starting index\\n boolean retval1 = str.startsWith(startstr1);\\n boolean retval2 = str.startsWith(startstr2, 4);\\n\\n //Prints true if the string starts with given substring\\n System.out.println(\\\"starts with \\\" + startstr1 + \\\" ? \\\" + retval1);\\n System.out.println(\\\"string \\\" + startstr2 + \\\" starting from index 4 ? \\\" + retval2);\\n }\\n}\"\n },\n {\n \"code\": null,\n \"e\": 2051,\n \"s\": 1941,\n \"text\": \"www.tutorialspoint.com\\nstarts with tutorialspoint ? false\\nstring tutorialspoint starting from index 4 ? true\\n\"\n }\n]"}}},{"rowIdx":581,"cells":{"title":{"kind":"string","value":"Automatic Email Notifications for HiveQL/SQL Script Completion | by Andrew Young | Towards Data Science"},"text":{"kind":"string","value":"If you find this article helpful in any way, please comment or click the applause button at the left to give me free virtual encouragement!\nThis bash script is very useful for executing HiveQL files (.hql)or SQL files (.sql). For instance, rather than having to periodically check when a CREATE TABLE or JOINhas finished, this script provides email notifications as well as a record for the clock time needed for the Hive/SQL operation(s) involved. 
This is especially useful for completing a series of Hive/SQL JOINs overnight for analysis the next day.
As a practicing data scientist, I dislike overexplained fluff pieces when I’m just looking for code to recycle and modify for my purposes, so please find the script below. If you would like more explanation of the script beyond the code comments, please see the appendix at the end. A sufficient explanation of bash, its transformations at the bit-level, HiveQL/SQL and database engines is beyond the scope of this article. As it is written, the script below will only work for HiveQL when copy-pasted. You will have to make slight modifications for it to work with your SQL distribution. For example, changing hive -f to the mysql client’s source command if you are using MySQL. It is also possible to have the script send emails to more than one email address.
#!/bin/bash

## By Andrew Young
## Contact: andrew.wong@team.neustar
## Last modified date: 13 Dec 2019

################
## DIRECTIONS ##
################
## Run the following without the "<" or ">"
## to make the script executable:
## chmod +x <scriptname>.sh  (where scriptname is the name of your script)
## To run this script, use the following command:
## ./<scriptname>.sh

##################################
## Ask user for Hive file name ##
##################################
echo Hello. Please enter the HiveQL filename with the file extension. For example, 'script.hql'. To cancel, press 'Control+C'. For your convenience, here is a list of files in the present directory\:
ls -l
read -p 'Enter the HiveQL filename: ' HIVE_FILE_NAME
#read HIVE_FILE_NAME
echo You specified: $HIVE_FILE_NAME
echo Executing...

######################
## Define variables ##
######################
start="$(date)"
starttime=$(date +%s)

#####################
## Run Hive script ##
#####################
hive -f "$HIVE_FILE_NAME"

#################################
## Human readable elapsed time ##
#################################
secs_to_human() {
    if [[ -z ${1} || ${1} -lt 60 ]] ;then
        min=0 ; secs="${1}"
    else
        time_mins=$(echo "scale=2; ${1}/60" | bc)
        min=$(echo ${time_mins} | cut -d'.' -f1)
        secs="0.$(echo ${time_mins} | cut -d'.' -f2)"
        secs=$(echo ${secs}*60|bc|awk '{print int($1+0.5)}')
    fi
    timeelapsed="Clock Time Elapsed : ${min} minutes and ${secs} seconds."
}

################
## Send email ##
################
end="$(date)"
endtime=$(date +%s)
secs_to_human $(($(date +%s) - ${starttime}))
subject="$HIVE_FILE_NAME started @ $start || finished @ $end"
## message="Note that $start and $end use server time. \n $timeelapsed"
## working version:
mail -s "$subject" youremail@wherever.com <<< "$(printf "Server start time: $start \nServer end time: $end \n$timeelapsed")"

#################
## FUTURE WORK ##
#################
# 1. Add a diagnostic report. The report will calculate:
# a. number of rows,
# b. number of distinct observations for each column,
# c. an option for a cutoff threshold for observations that ensures the diagnostic report is only produced for tables of a certain size. this can help prevent computationally expensive diagnostics for a large table
#
# 2. Option to save output to a text file for inclusion in email notification.
#################
Here I will explain the code for each section of the script.
################
## DIRECTIONS ##
################
## Run the following without the "<" or ">"
## to make the script executable. 
(where scriptname is the name of your script)
## To run this script, use the following command:
## ./<scriptname>.sh
chmod is a bash command that stands for “change mode.” I am using it to change the permissions of the file. It ensures that you, the owner, are allowed to execute your own file on the node/machine you are using.
##################################
## Ask user for Hive file name ##
##################################
echo Hello. Please enter the HiveQL filename with the file extension. For example, 'script.hql'. To cancel, press 'Control+C'. For your convenience, here is a list of files in the present directory\:
ls -l
read -p 'Enter the HiveQL filename: ' HIVE_FILE_NAME
#read HIVE_FILE_NAME
echo You specified: $HIVE_FILE_NAME
echo Executing...
In this section, I use the command ls -l to list all files in the same directory (i.e. “folder”) as the .sh file you saved my script to. The read -p is used to save the user input, i.e. the name of the .hql or .sql file you want to run, to a variable that is used later on in the script.
######################
## Define variables ##
######################
start="$(date)"
starttime=$(date +%s)
This is where the system’s clock time is recorded. We save two versions to two different variables: start and starttime.
start="$(date)" saves the system time and date. starttime=$(date +%s) saves the same moment as the number of seconds since the Unix epoch (1 January 1970). The first version is later used in the email to show the timestamp and date of when the HiveQL/SQL script began execution. The second is used to calculate the number of seconds and minutes that have elapsed. This provides a handy record for the amount of time it took to build the table(s) in the HiveQL/SQL script you supplied.
#################################
## Human readable elapsed time ##
#################################
secs_to_human() {
    if [[ -z ${1} || ${1} -lt 60 ]] ;then
        min=0 ; secs="${1}"
    else
        time_mins=$(echo "scale=2; ${1}/60" | bc)
        min=$(echo ${time_mins} | cut -d'.' -f1)
        secs="0.$(echo ${time_mins} | cut -d'.' -f2)"
        secs=$(echo ${secs}*60|bc|awk '{print int($1+0.5)}')
    fi
    timeelapsed="Clock Time Elapsed : ${min} minutes and ${secs} seconds."
}
This is a complicated section to decode. Basically, it is a lot of code that does something very simple: finding the difference between start and end time, both in seconds, then converting that elapsed time in seconds to minutes and seconds. For example, if the job took 605 seconds, we transform that to 10 minutes 5 seconds and save it to a variable called timeelapsed to be used in the email we have sent to ourselves and/or various stakeholders.
################
## Send email ##
################
end="$(date)"
endtime=$(date +%s)
secs_to_human $(($(date +%s) - ${starttime}))
subject="$HIVE_FILE_NAME started @ $start || finished @ $end"
## message="Note that $start and $end use server time. \n $timeelapsed"
## working version:
mail -s "$subject" youremail@wherever.com <<< "$(printf "Server start time: $start \nServer end time: $end \n$timeelapsed")"
In this section, I again record system time in two different formats as two different variables. I actually only use one of these variables. I decided to keep the unused variable, endtime, for potential future extensions to this script. The variable $end is used in the email notification to report the system time and date for when the HiveQL/SQL file completed.
I define a variable subject that, unsurprisingly, becomes the subject line of the email. I commented out the message variable because I couldn’t get the printf command to correctly replace \n with new lines in the message body of the email. I left it in place because I wanted to leave you with the option of editing it for your own purposes.
mail is a program that sends emails. You will probably have it or something like it. There are other options to mail, like mailx, sendmail, smtp-cli, ssmtp, Swaks and a number of others. 
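The printf behavior discussed above can be shown in isolation. The following is a minimal, standalone sketch (the timestamp and elapsed-time values are made up for illustration and are not from the script) of how printf expands \n escape sequences in its format string into real line breaks, which is why the working mail command routes its message body through printf:

```shell
# Hypothetical values for illustration only (not part of the original script).
start="Fri Dec 13 10:00:00 UTC 2019"
end="Fri Dec 13 10:10:05 UTC 2019"
timeelapsed="Clock Time Elapsed : 10 minutes and 5 seconds."

# printf interprets \n in its format string as a real newline, so $body
# becomes a three-line message suitable for mail's standard input.
body="$(printf "Server start time: %s\nServer end time: %s\n%s" "$start" "$end" "$timeelapsed")"
echo "$body"
```

By contrast, assigning the same text with a plain double-quoted string, as in the commented-out message variable, leaves \n as two literal characters rather than a line break.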
Some of these programs are subsets or derivatives of each other.
In the spirit of collaboration and furthering this effort, here are some ideas for future work:
Allow a user to specify the directory of HiveQL/SQL files in which to look for the target file to be run. An example use-case is where a user might have organized their HiveQL/SQL code into multiple directories based on project/time.
Improve the formatting of the email notification.
Add robustness to the script by automatically detecting whether a HiveQL or SQL script was input, and if the latter, an option to specify the distribution (i.e. OracleDB, MySQL, etc.).
Add the option for the user to specify whether the output of the query should be sent to the email(s) specified. Use-case: running a multitude of SELECT statements for exploratory data analysis (EDA). For example, counting number of distinct values of each field of a table, finding distinct levels of a categorical field, finding counts by group, numerical summary statistics like mean, median, standard deviation, etc.
Add the option for the user to specify one or more email addresses at the command line.
Add an option for the user to specify multiple .hql and/or .sql files to be run in sequence or perhaps in parallel.
Add a parallelization option. Use-case: you have a tight deadline and need not concern yourself with resource utilization or co-worker etiquette. I need my results!
Add an option to schedule code execution. Likely in the form of a sleep specified by the user. Use-case: you want to be courteous and avoid runs during production code runs and when cluster usage is high during the work day.
Make one or more of the options above accessible via flags. This would be really cool, but also a lot of work just to add sprinkles on top.
More.
Allow a user to specify the directory of HiveQL/SQL files in which to look for the target file to be run. An example use-case is where a user might have organized their HiveQL/SQL code into multiple directories based on project/time.
Improve the formatting of the email notification.
Add robustness to the script by automatically detecting whether a HiveQL or SQL script was input, and if the latter, an option to specify the distribution (i.e. OracleDB, MySQL, etc.).
Add the option for the user to specify whether the output of the query should be sent to the email(s) specified. Use-case: running a multitude of SELECT statements for exploratory data analysis (EDA). For example, counting number of distinct values of each field of a table, finding distinct levels of a categorical field, finding counts by group, numerical summary statistics like mean, median, standard deviation, etc.
Add the option for the user to specify one or more email addresses at the command line.
Add an option for the user to specify multiple .hql and/or .sql files to be run in sequence or perhaps in parallel.
Add a parallelization option. Use-case: you have a tight deadline and need not concern yourself with resource utilization or co-worker etiquette. I need my results!
Add an option to schedule code execution. Likely in the form of a sleep specified by the user. Use-case: you want to be courteous and avoid runs during production code runs and when cluster usage is high during the work day.
Make one or more of the options above accessible via flags. This would be really cool, but also a lot of work just to add sprinkles on top.
More.
Open whatever command line terminal interface you prefer. Examples include Terminal on MacOS and Cygwin on Windows. MacOS has Terminal already installed. On Windows, you can download Cygwin here.
Navigate to the directory with your .hql files. For example, type cd anyoung/hqlscripts/ to change your current working directory to that folder. This is the conceptual equivalent of double-clicking on a folder to open it, except here we are using a text-based command.
nano <scriptname>.sh
nano is a command that opens a program of the same name and asks it to create a new file called <scriptname>.sh. To revise this file, utilize the same command.
Copy my script and paste it into this new .sh file. Change the recipient email address in my script from youremail@wherever.com to yours.
bash <scriptname>.sh
Bash uses files of the extension type .sh. This command asks the program, bash, to run your “shell” file denoted with the .sh extension. Upon running, the script will output a list of all files in the same directory as the shell script. This list output is something I programmed for convenience.
Marvel at your productivity gains!
Open whatever command line terminal interface you prefer. Examples include Terminal on MacOS and Cygwin on Windows. MacOS has Terminal already installed. On Windows, you can download Cygwin here.
Navigate to the directory with your .hql files. For example, type cd anyoung/hqlscripts/ to change your current working directory to that folder. This is the conceptual equivalent of double-clicking on a folder to open it, except here we are using a text-based command.
nano <scriptname>.sh
nano is a command that opens a program of the same name and asks it to create a new file called <scriptname>.sh. To revise this file, utilize the same command.
Copy my script and paste it into this new .sh file. Change the recipient email address in my script from youremail@wherever.com to yours.
bash <scriptname>.sh
Bash uses files of the extension type .sh. This command asks the program, bash, to run your “shell” file denoted with the .sh extension. Upon running, the script will output a list of all files in the same directory as the shell script. This list output is something I programmed for convenience.
Marvel at your productivity gains!
Andrew Young is an R&D Data Scientist Manager at Neustar. 
For context, Neustar is an information services company that ingests structured and unstructured text and picture data from hundreds of companies in the domains of aviation, banking, government, marketing, social media and telecommunications to name several. Neustar combines these data ingredients then sells a finished dish with added value to enterprise clients for purposes like consulting, cyber security, fraud detection and marketing. In this context, Mr. Young is a hands-on lead architect on a small R&D data science team that builds the system feeding all products and services responsible for $1+ billion in annual revenue for Neustar.
A pre-requisite to running the shell script above is to have a HiveQL or SQL script for it to run. Here is an example HiveQL script, example.hql:
CREATE TABLE db.tb1 STORED AS ORC AS
SELECT *
FROM db.tb2
WHERE a >= 5;

CREATE TABLE db.tb3 STORED AS ORC AS
SELECT *
FROM db.tb4 a
INNER JOIN db.tb5 b
ON a.col2 = b.col5
WHERE dt >= 20191201 AND DOB != 01-01-1900;
Note that you can have one or more CREATE TABLE commands in a single HiveQL/SQL script. They will be executed in sequence. If you have access to a cluster of nodes, you can also benefit from parallelization of your queries. I may explain that in another article."},"parsed":{"kind":"list like","value":[{"code":null,"e":312,"s":172,"text":"If you find this article helpful in any way, please comment or click the applause button at the left to give me free virtual encouragement!"},{"code":null,"e":726,"s":312,"text":"This bash script is very useful for executing HiveQL files (.hql)or SQL files (.sql). For instance, rather than having to periodically check when a CREATE TABLE or JOINhas finished, this script provides email notifications as well as a record for the clock time needed for the Hive/SQL operation(s) involved. 
This is especially useful for completing a series of Hive/SQL JOINs overnight for analysis the next day."},{"code":null,"e":1469,"s":726,"text":"A s a practicing data scientist, I dislike overexplained fluff pieces when I’m just looking for code to recycle and modify for my purposes so please find the script below. If you would like more explanation of the script beyond the code comments, please see the appendix at the end. A sufficient explanation of bash , its transformations at the bit-level, HiveQL/SQL and database engines is beyond the scope of this article. As it is written, the script below will only work for HiveQL when copy-pasted. You will have to make slight modifications for it to work with your SQL distribution. For example, changing hive -f to mysql source if you are using MySQL. It is also possible to have the script send emails to more than one email address."},{"code":null,"e":3849,"s":1469,"text":"#!/bin/bash## By Andrew Young ## Contact: andrew.wong@team.neustar## Last modified date: 13 Dec 2019################## DIRECTIONS #################### Run the following without the \"<\" or \">\".## to make the script executable. (where scriptname is the name of your script)## To run this script, use the following command:## ./.sh################################### Ask user for Hive file name ###################################echo Hello. Please enter the HiveQL filename with the file extension. For example, 'script.hql'. To cancel, press 'Control+C'. 
For your convenience, here is a list of files in the present directory\\:ls -lread -p 'Enter the HiveQL filename: ' HIVE_FILE_NAME#read HIVE_FILE_NAMEecho You specified: $HIVE_FILE_NAMEecho Executing...######################## Define variables ########################start=\"$(date)\"starttime=$(date +%s)####################### Run Hive script #######################hive -f \"$HIVE_FILE_NAME\"################################### Human readable elapsed time ###################################secs_to_human() { if [[ -z ${1} || ${1} -lt 60 ]] ;then min=0 ; secs=\"${1}\" else time_mins=$(echo \"scale=2; ${1}/60\" | bc) min=$(echo ${time_mins} | cut -d'.' -f1) secs=\"0.$(echo ${time_mins} | cut -d'.' -f2)\" secs=$(echo ${secs}*60|bc|awk '{print int($1+0.5)}') fi timeelapsed=\"Clock Time Elapsed : ${min} minutes and ${secs} seconds.\"}################## Send email ##################end=\"$(date)\"endtime=$(date +%s)secs_to_human $(($(date +%s) - ${starttime}))subject=\"$HIVE_FILE_NAME started @ $start || finished @ $end\"## message=\"Note that $start and $end use server time. \\n $timeelapsed\"## working version:mail -s \"$subject\" youremail@wherever.com <<< \"$(printf \"Server start time: $start \\nServer end time: $end \\n$timeelapsed\")\"################### FUTURE WORK #################### 1. Add a diagnostic report. The report will calculate:# a. number of rows,# b. number of distinct observations for each column,# c. an option for a cutoff threshold for observations that ensures the diagnostic report is only produced for tables of a certain size. this can help prevent computationally expensive diagnostics for a large table## 2. 
Option to save output to a text file for inclusion in email notification.#################"},{"code":null,"e":3911,"s":3849,"text":"H ere I will explain the code for each section of the script."},{"code":null,"e":4188,"s":3911,"text":"################## DIRECTIONS #################### Run the following without the \"<\" or \">\".## to make the script executable. (where scriptname is the name of your script)## To run this script, use the following command:## ./.sh"},{"code":null,"e":4399,"s":4188,"text":"chmod is a bash command that stands for “change mode.” I am using it to change the permissions of the file. It ensures that you, the owner, is allowed to execute your own file on the node/machine you are using."},{"code":null,"e":4827,"s":4399,"text":"################################### Ask user for Hive file name ###################################echo Hello. Please enter the HiveQL filename with the file extension. For example, 'script.hql'. To cancel, press 'Control+C'. For your convenience, here is a list of files in the present directory\\:ls -lread -p 'Enter the HiveQL filename: ' HIVE_FILE_NAME#read HIVE_FILE_NAMEecho You specified: $HIVE_FILE_NAMEecho Executing..."},{"code":null,"e":5115,"s":4827,"text":"In this section, I use the command ls -l to list all files in the same directory (i.e. “folder”) as the .sh file you saved my script to. The read -p is used to save the user input, i.e. the name of the .hql or .sql file you want to run, to a variable that is used later on in the script."},{"code":null,"e":5218,"s":5115,"text":"######################## Define variables ########################start=\"$(date)\"starttime=$(date +%s)"},{"code":null,"e":5338,"s":5218,"text":"This is where the system’s clock time is recorded. We save two versions to two different variables: startand starttime."},{"code":null,"e":5804,"s":5338,"text":"start=\"$(date)\" saves the system time and date. 
starttime=$(date +%s) saves the system time and date in number seconds offset from 1970. The first version is later used in the email to show the timestamp and date of when the HiveQL/SQL script began execution. The second is used to calculate the number of seconds and minutes that have elapsed. This provides a handy record for the amount of time it took to build the table(s) in the HiveQL/SQL script you supplied."},{"code":null,"e":6288,"s":5804,"text":"################################### Human readable elapsed time ###################################secs_to_human() { if [[ -z ${1} || ${1} -lt 60 ]] ;then min=0 ; secs=\"${1}\" else time_mins=$(echo \"scale=2; ${1}/60\" | bc) min=$(echo ${time_mins} | cut -d'.' -f1) secs=\"0.$(echo ${time_mins} | cut -d'.' -f2)\" secs=$(echo ${secs}*60|bc|awk '{print int($1+0.5)}') fi timeelapsed=\"Clock Time Elapsed : ${min} minutes and ${secs} seconds.\"}"},{"code":null,"e":6738,"s":6288,"text":"This is a complicated section to decode. Basically, it is a lot of code that does something very simple: finding the difference between start and end time, both in seconds, then converting that elapsed time in seconds to minutes and seconds. For example, if the job took 605 seconds, we transform that to 10 minutes 5 seconds and save it to a variable called timeelapsed to be used in the email we have sent to ourselves and/or various stakeholders."},{"code":null,"e":7143,"s":6738,"text":"################## Send email ##################end=\"$(date)\"endtime=$(date +%s)secs_to_human $(($(date +%s) - ${starttime}))subject=\"$HIVE_FILE_NAME started @ $start || finished @ $end\"## message=\"Note that $start and $end use server time. \\n $timeelapsed\"## working version:mail -s \"$subject\" youremail@wherever.com <<< \"$(printf \"Server start time: $start \\nServer end time: $end \\n$timeelapsed\")\""},{"code":null,"e":7506,"s":7143,"text":"In this section, I again record system time in two different formats as two different variables. 
I actually only use one of these variables. I decided to keep the unused variable, endtime, for potential future extensions to this script. The variable $end is used in the email notification to report the system time and date for when the HiveQL/SQL file completed."},{"code":null,"e":7849,"s":7506,"text":"I define a variable subject that, unsurprisingly, becomes the subject line of the email. I commented out the message variable because I couldn’t get the printf command to correctly replace \\n with new lines in the message body of the email. I left it in place because I wanted to leave you with the option of editing it for your own purposes."},{"code":null,"e":8101,"s":7849,"text":"mail is a program that sends emails. You will probably have it or something like it. There are other options to mail, like mailx, sendmail, smtp-cli, ssmtp, Swaks and a number of others. Some of these programs are subsets or derivatives of each other."},{"code":null,"e":8198,"s":8101,"text":"In the spirit of collaboration and furthering this effort, here are some ideas for future work:"},{"code":null,"e":9819,"s":8198,"text":"Allow a user to specify the directory of HiveQL/SQL files in which to look for the target file to be run. An example use-case is where a user might have organized their HiveQL/SQL code into multiple directories based on project/time.Improve the formatting of the email notification.Add robustness to the script by automatically detecting whether a HiveQL or SQL script was input, and if the latter, an option to specify the distribution (i.e. OracleDB, MySQL, etc.).Add the option for the user to specify whether the output of the query should be sent to the email(s) specified. Use-case: running a multitude of SELECT statements for exploratory data analysis (EDA). 
For example, counting number of distinct values of each field of a table, finding distinct levels of a categorical field, finding counts by group, numerical summary statistics like mean, median, standard deviation, etc.Add the option for the user to specify one or more email addresses at the command line.Add an option for the user to specify multiple .hql and/or .sql files to be run in sequence or perhaps in parallel.Add a parallelization option. Use-case: you have a tight deadline and need not concern yourself with resource utilization or co-worker etiquette. I need my results!Add an option to schedule code execution. Likely in the form of a sleep specified by the user. Use-case: you want to be courteous and avoid runs during production code runs and when cluster usage is high during the work day.Make one or more of the options above accessible via flags. This would be really cool, but also a lot of work just to add sprinkles on top.More."},{"code":null,"e":10053,"s":9819,"text":"Allow a user to specify the directory of HiveQL/SQL files in which to look for the target file to be run. An example use-case is where a user might have organized their HiveQL/SQL code into multiple directories based on project/time."},{"code":null,"e":10103,"s":10053,"text":"Improve the formatting of the email notification."},{"code":null,"e":10288,"s":10103,"text":"Add robustness to the script by automatically detecting whether a HiveQL or SQL script was input, and if the latter, an option to specify the distribution (i.e. OracleDB, MySQL, etc.)."},{"code":null,"e":10709,"s":10288,"text":"Add the option for the user to specify whether the output of the query should be sent to the email(s) specified. Use-case: running a multitude of SELECT statements for exploratory data analysis (EDA). 
For example, counting number of distinct values of each field of a table, finding distinct levels of a categorical field, finding counts by group, numerical summary statistics like mean, median, standard deviation, etc."},{"code":null,"e":10797,"s":10709,"text":"Add the option for the user to specify one or more email addresses at the command line."},{"code":null,"e":10913,"s":10797,"text":"Add an option for the user to specify multiple .hql and/or .sql files to be run in sequence or perhaps in parallel."},{"code":null,"e":11078,"s":10913,"text":"Add a parallelization option. Use-case: you have a tight deadline and need not concern yourself with resource utilization or co-worker etiquette. I need my results!"},{"code":null,"e":11303,"s":11078,"text":"Add an option to schedule code execution. Likely in the form of a sleep specified by the user. Use-case: you want to be courteous and avoid runs during production code runs and when cluster usage is high during the work day."},{"code":null,"e":11443,"s":11303,"text":"Make one or more of the options above accessible via flags. This would be really cool, but also a lot of work just to add sprinkles on top."},{"code":null,"e":11449,"s":11443,"text":"More."},{"code":null,"e":12597,"s":11449,"text":"Open whatever command line terminal interface you prefer. Examples include Terminal on MacOS and Cygwin on Windows. MacOS has Terminal already installed. On Windows, you can download Cygwin here.Navigate to the directory with your .hql files. For example, type cd anyoung/hqlscripts/ to change your current working directory to that folder. This is the conceptual equivalent of double-clicking on a folder to open it, except here we are using a text-based command.nano <scriptname>.shnano is a command that opens a program of the same name and asks it to create a new file called <scriptname>.sh. To revise this file, utilize the same command.Copy my script and paste it into this new .sh file. 
Change the recipient email address in my script from youremail@wherever.com to yours.bash <scriptname>.sh Bash uses files of the extension type .sh. This command asks the program, bash, to run your “shell” file denoted with the .sh extension. Upon running, the script will output a list of all files in the same directory as the shell script. This list output is something I programmed for convenience.Marvel at your productivity gains!"},{"code":null,"e":12793,"s":12597,"text":"Open whatever command line terminal interface you prefer. Examples include Terminal on MacOS and Cygwin on Windows. MacOS has Terminal already installed. On Windows, you can download Cygwin here."},{"code":null,"e":13063,"s":12793,"text":"Navigate to the directory with your .hql files. For example, type cd anyoung/hqlscripts/ to change your current working directory to that folder. This is the conceptual equivalent of double-clicking on a folder to open it, except here we are using a text-based command."},{"code":null,"e":13254,"s":13063,"text":"nano <scriptname>.sh. nano is a command that opens a program of the same name and asks it to create a new file called <scriptname>.sh. To revise this file, utilize the same command."},{"code":null,"e":13392,"s":13254,"text":"Copy my script and paste it into this new .sh file. Change the recipient email address in my script from youremail@wherever.com to yours."},{"code":null,"e":13715,"s":13392,"text":"bash <scriptname>.sh Bash uses files of the extension type .sh. This command asks the program, bash, to run your “shell” file denoted with the .sh extension. Upon running, the script will output a list of all files in the same directory as the shell script. This list output is something I programmed for convenience."},{"code":null,"e":13750,"s":13715,"text":"Marvel at your productivity gains!"},{"code":null,"e":14455,"s":13750,"text":"Andrew Young is an R&D Data Scientist Manager at Neustar. 
For context, Neustar is an information services company that ingests structured and unstructured text and picture data from hundreds of companies in the domains of aviation, banking, government, marketing, social media and telecommunications to name several. Neustar combines these data ingredients then sells a finished dish with added value to enterprise clients for purposes like consulting, cyber security, fraud detection and marketing. In this context, Mr. Young is a hands-on lead architect on a small R&D data science team that builds the system feeding all products and services responsible for $1+ billion in annual revenue for Neustar."},{"code":null,"e":14601,"s":14455,"text":"A prerequisite to running the shell script above is to have a HiveQL or SQL script for it to run. Here is an example HiveQL script, example.hql:"},{"code":null,"e":14807,"s":14601,"text":"CREATE TABLE db.tb1 STORED AS ORC AS SELECT * FROM db.tb2 WHERE a >= 5; CREATE TABLE db.tb3 STORED AS ORC AS SELECT * FROM db.tb4 a INNER JOIN db.tb5 b ON a.col2 = b.col5 WHERE dt >= 20191201 AND DOB != 01-01-1900;"}],"string":"[\n {\n \"code\": null,\n \"e\": 312,\n \"s\": 172,\n \"text\": \"If you find this article helpful in any way, please comment or click the applause button at the left to give me free virtual encouragement!\"\n },\n {\n \"code\": null,\n \"e\": 726,\n \"s\": 312,\n \"text\": \"This bash script is very useful for executing HiveQL files (.hql) or SQL files (.sql). For instance, rather than having to periodically check when a CREATE TABLE or JOIN has finished, this script provides email notifications as well as a record for the clock time needed for the Hive/SQL operation(s) involved. 
This is especially useful for completing a series of Hive/SQL JOINs overnight for analysis the next day.\"\n },\n {\n \"code\": null,\n \"e\": 1469,\n \"s\": 726,\n \"text\": \"As a practicing data scientist, I dislike overexplained fluff pieces when I’m just looking for code to recycle and modify for my purposes, so please find the script below. If you would like more explanation of the script beyond the code comments, please see the appendix at the end. A sufficient explanation of bash, its transformations at the bit-level, HiveQL/SQL and database engines is beyond the scope of this article. As it is written, the script below will only work for HiveQL when copy-pasted. You will have to make slight modifications for it to work with your SQL distribution. For example, changing hive -f to mysql source if you are using MySQL. It is also possible to have the script send emails to more than one email address.\"\n },\n {\n \"code\": null,\n \"e\": 3849,\n \"s\": 1469,\n \"text\": \"#!/bin/bash## By Andrew Young ## Contact: andrew.wong@team.neustar## Last modified date: 13 Dec 2019################## DIRECTIONS #################### Run the following without the \\\"<\\\" or \\\">\\\":## chmod +x <scriptname>.sh## to make the script executable. (where scriptname is the name of your script)## To run this script, use the following command:## ./<scriptname>.sh################################### Ask user for Hive file name ###################################echo Hello. Please enter the HiveQL filename with the file extension. For example, 'script.hql'. To cancel, press 'Control+C'. 
For your convenience, here is a list of files in the present directory\\\\:ls -lread -p 'Enter the HiveQL filename: ' HIVE_FILE_NAME#read HIVE_FILE_NAMEecho You specified: $HIVE_FILE_NAMEecho Executing...######################## Define variables ########################start=\\\"$(date)\\\"starttime=$(date +%s)####################### Run Hive script #######################hive -f \\\"$HIVE_FILE_NAME\\\"################################### Human readable elapsed time ###################################secs_to_human() { if [[ -z ${1} || ${1} -lt 60 ]] ;then min=0 ; secs=\\\"${1}\\\" else time_mins=$(echo \\\"scale=2; ${1}/60\\\" | bc) min=$(echo ${time_mins} | cut -d'.' -f1) secs=\\\"0.$(echo ${time_mins} | cut -d'.' -f2)\\\" secs=$(echo ${secs}*60|bc|awk '{print int($1+0.5)}') fi timeelapsed=\\\"Clock Time Elapsed : ${min} minutes and ${secs} seconds.\\\"}################## Send email ##################end=\\\"$(date)\\\"endtime=$(date +%s)secs_to_human $(($(date +%s) - ${starttime}))subject=\\\"$HIVE_FILE_NAME started @ $start || finished @ $end\\\"## message=\\\"Note that $start and $end use server time. \\\\n $timeelapsed\\\"## working version:mail -s \\\"$subject\\\" youremail@wherever.com <<< \\\"$(printf \\\"Server start time: $start \\\\nServer end time: $end \\\\n$timeelapsed\\\")\\\"################### FUTURE WORK #################### 1. Add a diagnostic report. The report will calculate:# a. number of rows,# b. number of distinct observations for each column,# c. an option for a cutoff threshold for observations that ensures the diagnostic report is only produced for tables of a certain size. this can help prevent computationally expensive diagnostics for a large table## 2. 
Option to save output to a text file for inclusion in email notification.#################\"\n },\n {\n \"code\": null,\n \"e\": 3911,\n \"s\": 3849,\n \"text\": \"H ere I will explain the code for each section of the script.\"\n },\n {\n \"code\": null,\n \"e\": 4188,\n \"s\": 3911,\n \"text\": \"################## DIRECTIONS #################### Run the following without the \\\"<\\\" or \\\">\\\".## to make the script executable. (where scriptname is the name of your script)## To run this script, use the following command:## ./.sh\"\n },\n {\n \"code\": null,\n \"e\": 4399,\n \"s\": 4188,\n \"text\": \"chmod is a bash command that stands for “change mode.” I am using it to change the permissions of the file. It ensures that you, the owner, is allowed to execute your own file on the node/machine you are using.\"\n },\n {\n \"code\": null,\n \"e\": 4827,\n \"s\": 4399,\n \"text\": \"################################### Ask user for Hive file name ###################################echo Hello. Please enter the HiveQL filename with the file extension. For example, 'script.hql'. To cancel, press 'Control+C'. For your convenience, here is a list of files in the present directory\\\\:ls -lread -p 'Enter the HiveQL filename: ' HIVE_FILE_NAME#read HIVE_FILE_NAMEecho You specified: $HIVE_FILE_NAMEecho Executing...\"\n },\n {\n \"code\": null,\n \"e\": 5115,\n \"s\": 4827,\n \"text\": \"In this section, I use the command ls -l to list all files in the same directory (i.e. “folder”) as the .sh file you saved my script to. The read -p is used to save the user input, i.e. the name of the .hql or .sql file you want to run, to a variable that is used later on in the script.\"\n },\n {\n \"code\": null,\n \"e\": 5218,\n \"s\": 5115,\n \"text\": \"######################## Define variables ########################start=\\\"$(date)\\\"starttime=$(date +%s)\"\n },\n {\n \"code\": null,\n \"e\": 5338,\n \"s\": 5218,\n \"text\": \"This is where the system’s clock time is recorded. 
We save two versions to two different variables: start and starttime.\"\n },\n {\n \"code\": null,\n \"e\": 5804,\n \"s\": 5338,\n \"text\": \"start=\\\"$(date)\\\" saves the system time and date. starttime=$(date +%s) saves the system time as the number of seconds elapsed since the Unix epoch (00:00:00 UTC on 1 January 1970). The first version is later used in the email to show the timestamp and date of when the HiveQL/SQL script began execution. The second is used to calculate the number of seconds and minutes that have elapsed. This provides a handy record for the amount of time it took to build the table(s) in the HiveQL/SQL script you supplied.\"\n },\n {\n \"code\": null,\n \"e\": 6288,\n \"s\": 5804,\n \"text\": \"################################### Human readable elapsed time ###################################secs_to_human() { if [[ -z ${1} || ${1} -lt 60 ]] ;then min=0 ; secs=\\\"${1}\\\" else time_mins=$(echo \\\"scale=2; ${1}/60\\\" | bc) min=$(echo ${time_mins} | cut -d'.' -f1) secs=\\\"0.$(echo ${time_mins} | cut -d'.' -f2)\\\" secs=$(echo ${secs}*60|bc|awk '{print int($1+0.5)}') fi timeelapsed=\\\"Clock Time Elapsed : ${min} minutes and ${secs} seconds.\\\"}\"\n },\n {\n \"code\": null,\n \"e\": 6738,\n \"s\": 6288,\n \"text\": \"This is a complicated section to decode. Basically, it is a lot of code that does something very simple: finding the difference between start and end time, both in seconds, then converting that elapsed time in seconds to minutes and seconds. For example, if the job took 605 seconds, we transform that to 10 minutes 5 seconds and save it to a variable called timeelapsed to be used in the email we send to ourselves and/or various stakeholders.\"\n },\n {\n \"code\": null,\n \"e\": 7143,\n \"s\": 6738,\n \"text\": \"################## Send email ##################end=\\\"$(date)\\\"endtime=$(date +%s)secs_to_human $(($(date +%s) - ${starttime}))subject=\\\"$HIVE_FILE_NAME started @ $start || finished @ $end\\\"## message=\\\"Note that $start and $end use server time. 
\\\\n $timeelapsed\\\"## working version:mail -s \\\"$subject\\\" youremail@wherever.com <<< \\\"$(printf \\\"Server start time: $start \\\\nServer end time: $end \\\\n$timeelapsed\\\")\\\"\"\n },\n {\n \"code\": null,\n \"e\": 7506,\n \"s\": 7143,\n \"text\": \"In this section, I again record system time in two different formats as two different variables. I actually only use one of these variables. I decided to keep the unused variable, endtime for potential future extensions to this script. The variable $end is used in the email notification to report the system time and date for when the HiveQL/SQL file completed.\"\n },\n {\n \"code\": null,\n \"e\": 7849,\n \"s\": 7506,\n \"text\": \"I define a variable subject that, unsurprisingly, becomes the subject line of the email. I commented out the message variable because I couldn’t get the printf command to correctly replace \\\\n with new lines in the message body of the email. I left it in place because I wanted to leave you with the option of editing it for your own purposes.\"\n },\n {\n \"code\": null,\n \"e\": 8101,\n \"s\": 7849,\n \"text\": \"mail is a program that sends emails. You will probably have it or something like it. There are other options to mail, like mailx, sendmail, smtp-cli, ssmtp, Swaks and a number of others. Some of these programs are subsets or derivatives of each other.\"\n },\n {\n \"code\": null,\n \"e\": 8198,\n \"s\": 8101,\n \"text\": \"I n the spirit of collaboration and furthering this effort, here are some ideas for future work:\"\n },\n {\n \"code\": null,\n \"e\": 9819,\n \"s\": 8198,\n \"text\": \"Allow a user to specify the directory of HiveQL/SQL files in which to look for the target file to be run. 
An example use-case is where a user might have organized their HiveQL/SQL code into multiple directories based on project/time.Improve the formatting of the email notification.Add robustness to the script by automatically detecting whether a HiveQL or SQL script was input, and if the latter, an option to specify the distribution (i.e. OracleDB, MySQL, etc.).Add the option for the user to specify whether the output of the query should be sent to the email(s) specified. Use-case: running a multitude of SELECT statements for exploratory data analysis (EDA). For example, counting number of distinct values of each field of a table, finding distinct levels of a categorical field, finding counts by group, numerical summary statistics like mean, median, standard deviation, etc.Add the option for the user to specify one or more email addresses at the command line.Add an option for the user to specify multiple .hql and/or .sql files to be run in sequence or perhaps in parallel.Add a parallelization option. Use-case: you have a tight deadline and need not concern yourself with resource utilization or co-worker etiquette. I need my results!Add an option to schedule code execution. Likely in the form of a sleep specified by the user. Use-case: you want to be courteous and avoid runs during production code runs and when cluster usage is high during the work day.Make one or more of the options above accessible via flags. This would be really cool, but also a lot of work just to add sprinkles on top.More.\"\n },\n {\n \"code\": null,\n \"e\": 10053,\n \"s\": 9819,\n \"text\": \"Allow a user to specify the directory of HiveQL/SQL files in which to look for the target file to be run. 
An example use-case is where a user might have organized their HiveQL/SQL code into multiple directories based on project/time.\"\n },\n {\n \"code\": null,\n \"e\": 10103,\n \"s\": 10053,\n \"text\": \"Improve the formatting of the email notification.\"\n },\n {\n \"code\": null,\n \"e\": 10288,\n \"s\": 10103,\n \"text\": \"Add robustness to the script by automatically detecting whether a HiveQL or SQL script was input, and if the latter, an option to specify the distribution (i.e. OracleDB, MySQL, etc.).\"\n },\n {\n \"code\": null,\n \"e\": 10709,\n \"s\": 10288,\n \"text\": \"Add the option for the user to specify whether the output of the query should be sent to the email(s) specified. Use-case: running a multitude of SELECT statements for exploratory data analysis (EDA). For example, counting number of distinct values of each field of a table, finding distinct levels of a categorical field, finding counts by group, numerical summary statistics like mean, median, standard deviation, etc.\"\n },\n {\n \"code\": null,\n \"e\": 10797,\n \"s\": 10709,\n \"text\": \"Add the option for the user to specify one or more email addresses at the command line.\"\n },\n {\n \"code\": null,\n \"e\": 10913,\n \"s\": 10797,\n \"text\": \"Add an option for the user to specify multiple .hql and/or .sql files to be run in sequence or perhaps in parallel.\"\n },\n {\n \"code\": null,\n \"e\": 11078,\n \"s\": 10913,\n \"text\": \"Add a parallelization option. Use-case: you have a tight deadline and need not concern yourself with resource utilization or co-worker etiquette. I need my results!\"\n },\n {\n \"code\": null,\n \"e\": 11303,\n \"s\": 11078,\n \"text\": \"Add an option to schedule code execution. Likely in the form of a sleep specified by the user. 
Use-case: you want to be courteous and avoid runs during production code runs and when cluster usage is high during the work day.\"\n },\n {\n \"code\": null,\n \"e\": 11443,\n \"s\": 11303,\n \"text\": \"Make one or more of the options above accessible via flags. This would be really cool, but also a lot of work just to add sprinkles on top.\"\n },\n {\n \"code\": null,\n \"e\": 11449,\n \"s\": 11443,\n \"text\": \"More.\"\n },\n {\n \"code\": null,\n \"e\": 12597,\n \"s\": 11449,\n \"text\": \"Open whatever command line terminal interface you prefer. Examples include Terminal on MacOS and Cygwin on Windows. MacOS has Terminal already installed. On Windows, you can download Cygwin here.Navigate to the directory with your .hql files. For example, type cd anyoung/hqlscripts/ to change your current working directory to that folder. This is the conceptual equivalent of double-clicking on a folder to open it, except here we are using a text-based command.nano .shnano is a command that opens a program of the same name and asks it to create a new file called .sh To revise this file, utilize the same command.Copy my script and paste it into this new .sh file. Change the recipient email address in my script from youremail@wherever.com to yours.bash .sh Bash uses files of the extension type .sh. This command asks the program, bash to run your “shell” file denoted with the .sh extension. Upon running, the script will output a list of all files in the same directory as the shell script. This list output is something I programmed for convenience.Marvel at your productivity gains!\"\n },\n {\n \"code\": null,\n \"e\": 12793,\n \"s\": 12597,\n \"text\": \"Open whatever command line terminal interface you prefer. Examples include Terminal on MacOS and Cygwin on Windows. MacOS has Terminal already installed. On Windows, you can download Cygwin here.\"\n },\n {\n \"code\": null,\n \"e\": 13063,\n \"s\": 12793,\n \"text\": \"Navigate to the directory with your .hql files. 
For example, type cd anyoung/hqlscripts/ to change your current working directory to that folder. This is the conceptual equivalent of double-clicking on a folder to open it, except here we are using a text-based command.\"\n },\n {\n \"code\": null,\n \"e\": 13254,\n \"s\": 13063,\n \"text\": \"nano .shnano is a command that opens a program of the same name and asks it to create a new file called .sh To revise this file, utilize the same command.\"\n },\n {\n \"code\": null,\n \"e\": 13392,\n \"s\": 13254,\n \"text\": \"Copy my script and paste it into this new .sh file. Change the recipient email address in my script from youremail@wherever.com to yours.\"\n },\n {\n \"code\": null,\n \"e\": 13715,\n \"s\": 13392,\n \"text\": \"bash .sh Bash uses files of the extension type .sh. This command asks the program, bash to run your “shell” file denoted with the .sh extension. Upon running, the script will output a list of all files in the same directory as the shell script. This list output is something I programmed for convenience.\"\n },\n {\n \"code\": null,\n \"e\": 13750,\n \"s\": 13715,\n \"text\": \"Marvel at your productivity gains!\"\n },\n {\n \"code\": null,\n \"e\": 14455,\n \"s\": 13750,\n \"text\": \"Andrew Young is an R&D Data Scientist Manager at Neustar. For context, Neustar is an information services company that ingests structured and unstructured text and picture data from hundreds of companies in the domains of aviation, banking, government, marketing, social media and telecommunications to name several. Neustar combines these data ingredients then sells a finished dish with added value to enterprise clients for purposes like consulting, cyber security, fraud detection and marketing. In this context, Mr. 
Young is a hands-on lead architect on a small R&D data science team that builds the system feeding all products and services responsible for $1+ billion in annual revenue for Neustar.\"\n },\n {\n \"code\": null,\n \"e\": 14601,\n \"s\": 14455,\n \"text\": \"A pre-requisite to running the shell script above is to have a HiveQL or SQL script for it to run. Here is an example HiveQL script, example.hql:\"\n },\n {\n \"code\": null,\n \"e\": 14807,\n \"s\": 14601,\n \"text\": \"CREATE TABLE db.tb1 STORED AS ORC ASSELECT *FROM db.tb2WHERE a >= 5;CREATE TABLE db.tb3 STORED AS ORC ASSELECT *FROM db.tb4 aINNER JOIN db.tb5 bON a.col2 = b.col5WHERE dt >= 20191201 AND DOB != 01-01-1900;\"\n }\n]"}}},{"rowIdx":582,"cells":{"title":{"kind":"string","value":"How to update a MySQL date type column?"},"text":{"kind":"string","value":"Let us first create a table −\nmysql> create table DemoTable1451\n -> (\n -> JoiningDate date\n -> );\nQuery OK, 0 rows affected (0.52 sec)\nInsert some records in the table using insert command −\nmysql> insert into DemoTable1451 values('2019-07-21');\nQuery OK, 1 row affected (0.07 sec)\nmysql> insert into DemoTable1451 values('2018-01-31');\nQuery OK, 1 row affected (0.09 sec)\nmysql> insert into DemoTable1451 values('2017-06-01');\nQuery OK, 1 row affected (0.20 sec)\nDisplay all records from the table using select statement −\nmysql> select * from DemoTable1451;\nThis will produce the following output −\n+-------------+\n| JoiningDate |\n+-------------+\n| 2019-07-21 |\n| 2018-01-31 |\n| 2017-06-01 |\n+-------------+\n3 rows in set (0.00 sec)\nHere is the query to update a date type column. 
We are incrementing the date records by one year −
mysql> update DemoTable1451 set JoiningDate=date_add(JoiningDate, interval 1 year);
Query OK, 3 rows affected (0.15 sec)
Rows matched: 3 Changed: 3 Warnings: 0
Let us check the table records once again −
mysql> select * from DemoTable1451;
This will produce the following output −
+-------------+
| JoiningDate |
+-------------+
| 2020-07-21 |
| 2019-01-31 |
| 2018-06-01 |
+-------------+
3 rows in set (0.00 sec)
[{"code":null,"e":1092,"s":1062,"text":"Let us first create a table −"},{"code":null,"e":1203,"s":1092,"text":"mysql> create table DemoTable1451\n -> (\n -> JoiningDate date\n -> );\nQuery OK, 0 rows affected (0.52 sec)"},{"code":null,"e":1259,"s":1203,"text":"Insert some records in the table using insert command −"},{"code":null,"e":1532,"s":1259,"text":"mysql> insert into DemoTable1451 values('2019-07-21');\nQuery OK, 1 row affected (0.07 sec)\nmysql> insert into DemoTable1451 values('2018-01-31');\nQuery OK, 1 row affected (0.09 sec)\nmysql> insert into DemoTable1451 values('2017-06-01');\nQuery OK, 1 row affected (0.20 sec)"},{"code":null,"e":1592,"s":1532,"text":"Display all records from the table using select statement −"},{"code":null,"e":1628,"s":1592,"text":"mysql> select * from DemoTable1451;"},{"code":null,"e":1669,"s":1628,"text":"This will produce the following output −"},{"code":null,"e":1806,"s":1669,"text":"+-------------+\n| JoiningDate |\n+-------------+\n| 2019-07-21 |\n| 2018-01-31 |\n| 2017-06-01 |\n+-------------+\n3 rows in set (0.00 sec)"},{"code":null,"e":1906,"s":1806,"text":"Here is the query to update a date type column. 
We are incrementing the date records by one year −"},{"code":null,"e":2067,"s":1906,"text":"mysql> update DemoTable1451 set JoiningDate=date_add(JoiningDate, interval 1 year);\nQuery OK, 3 rows affected (0.15 sec)\nRows matched: 3 Changed: 3 Warnings: 0"},{"code":null,"e":2111,"s":2067,"text":"Let us check the table records once again −"},{"code":null,"e":2147,"s":2111,"text":"mysql> select * from DemoTable1451;"},{"code":null,"e":2188,"s":2147,"text":"This will produce the following output −"},{"code":null,"e":2325,"s":2188,"text":"+-------------+\n| JoiningDate |\n+-------------+\n| 2020-07-21 |\n| 2019-01-31 |\n| 2018-06-01 |\n+-------------+\n3 rows in set (0.00 sec)"}],"string":"[\n {\n \"code\": null,\n \"e\": 1092,\n \"s\": 1062,\n \"text\": \"Let us first create a table −\"\n },\n {\n \"code\": null,\n \"e\": 1203,\n \"s\": 1092,\n \"text\": \"mysql> create table DemoTable1451\\n -> (\\n -> JoiningDate date\\n -> );\\nQuery OK, 0 rows affected (0.52 sec)\"\n },\n {\n \"code\": null,\n \"e\": 1259,\n \"s\": 1203,\n \"text\": \"Insert some records in the table using insert command −\"\n },\n {\n \"code\": null,\n \"e\": 1532,\n \"s\": 1259,\n \"text\": \"mysql> insert into DemoTable1451 values('2019-07-21');\\nQuery OK, 1 row affected (0.07 sec)\\nmysql> insert into DemoTable1451 values('2018-01-31');\\nQuery OK, 1 row affected (0.09 sec)\\nmysql> insert into DemoTable1451 values('2017-06-01');\\nQuery OK, 1 row affected (0.20 sec)\"\n },\n {\n \"code\": null,\n \"e\": 1592,\n \"s\": 1532,\n \"text\": \"Display all records from the table using select statement −\"\n },\n {\n \"code\": null,\n \"e\": 1628,\n \"s\": 1592,\n \"text\": \"mysql> select * from DemoTable1451;\"\n },\n {\n \"code\": null,\n \"e\": 1669,\n \"s\": 1628,\n \"text\": \"This will produce the following output −\"\n },\n {\n \"code\": null,\n \"e\": 1806,\n \"s\": 1669,\n \"text\": \"+-------------+\\n| JoiningDate |\\n+-------------+\\n| 2019-07-21 |\\n| 2018-01-31 |\\n| 2017-06-01 
|\\n+-------------+\\n3 rows in set (0.00 sec)\"\n },\n {\n \"code\": null,\n \"e\": 1906,\n \"s\": 1806,\n \"text\": \"Here is the query to update a date type column. We are incrementing the date records by one year −\"\n },\n {\n \"code\": null,\n \"e\": 2067,\n \"s\": 1906,\n \"text\": \"mysql> update DemoTable1451 set JoiningDate=date_add(JoiningDate, interval 1 year);\\nQuery OK, 3 rows affected (0.15 sec)\\nRows matched: 3 Changed: 3 Warnings: 0\"\n },\n {\n \"code\": null,\n \"e\": 2111,\n \"s\": 2067,\n \"text\": \"Let us check the table records once again −\"\n },\n {\n \"code\": null,\n \"e\": 2147,\n \"s\": 2111,\n \"text\": \"mysql> select * from DemoTable1451;\"\n },\n {\n \"code\": null,\n \"e\": 2188,\n \"s\": 2147,\n \"text\": \"This will produce the following output −\"\n },\n {\n \"code\": null,\n \"e\": 2325,\n \"s\": 2188,\n \"text\": \"+-------------+\\n| JoiningDate |\\n+-------------+\\n| 2020-07-21 |\\n| 2019-01-31 |\\n| 2018-06-01 |\\n+-------------+\\n3 rows in set (0.00 sec)\"\n }\n]"}
How to remove duplicates from MongoDB Collection?
For this, set “unique:true”, i.e. the unique constraint, and avoid inserting duplicates as in the below syntax −
db.yourCollectionName.ensureIndex({yourFieldName: 1}, {unique: true, dropDups: true})
Note that the dropDups option was removed in MongoDB 3.0; on modern servers a unique index only prevents new duplicates, and any existing duplicate documents must be deleted before such an index can be built.
To understand the above syntax, let us create a collection with documents. 
Here, duplicate insertion won't be allowed −

> db.demo604.ensureIndex({FirstName: 1}, {unique: true, dropDups: true});
{
   "createdCollectionAutomatically" : true,
   "numIndexesBefore" : 1,
   "numIndexesAfter" : 2,
   "ok" : 1
}
> db.demo604.insertOne({FirstName:"Chris"});
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e960887ed011c280a0905d8")
}
> db.demo604.insertOne({FirstName:"Bob"});
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e96088aed011c280a0905d9")
}
> db.demo604.insertOne({FirstName:"David"});
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e96088ded011c280a0905da")
}
> db.demo604.insertOne({FirstName:"Chris"});
2020-04-15T00:31:35.978+0530 E QUERY [js] WriteError: E11000 duplicate key error collection: test.demo604 index: FirstName_1 dup key: { : "Chris" } :
WriteError({
   "index" : 0,
   "code" : 11000,
   "errmsg" : "E11000 duplicate key error collection: test.demo604 index: FirstName_1 dup key: { : \"Chris\" }",
   "op" : {
      "_id" : ObjectId("5e96088fed011c280a0905db"),
      "FirstName" : "Chris"
   }
})
WriteError@src/mongo/shell/bulk_api.js:461:48
Bulk/mergeBatchResults@src/mongo/shell/bulk_api.js:841:49
Bulk/executeBatch@src/mongo/shell/bulk_api.js:906:13
Bulk/this.execute@src/mongo/shell/bulk_api.js:1150:21
DBCollection.prototype.insertOne@src/mongo/shell/crud_api.js:252:9
@(shell):1:1

Display all documents from the collection with the help of the find() method −

> db.demo604.find();

This will produce the following output −

{ "_id" : ObjectId("5e960887ed011c280a0905d8"), "FirstName" : "Chris" }
{ "_id" : ObjectId("5e96088aed011c280a0905d9"), "FirstName" : "Bob" }
{ "_id" : ObjectId("5e96088ded011c280a0905da"), "FirstName" : "David" }

Python | Find missing and additional values in two lists - GeeksforGeeks

21 Nov, 2018

Given two lists, find the missing and additional values in both the lists.

Examples:

Input : list1 = [1, 2, 3, 4, 5, 6]
        list2 = [4, 5, 6, 7, 8]
Output : Missing values in list1 = [8, 7]
         Additional values in list1 = [1, 2, 3]
         Missing values in list2 = [1, 2, 3]
         Additional values in list2 = [7, 8]

Approach: To find the missing elements of list2, we need to get the difference of list1 from list2. To find the additional elements of list2, calculate the difference of list2 from list1. Similarly, while finding the missing elements of list1, calculate the difference of list2 from list1.
To find the additional elements in list1, calculate the difference of list1 from list2.

Insert list1 and list2 into sets and then use the difference function on the sets to get the required answer.

Prerequisite : Python Set Difference

# Python program to find the missing
# and additional elements

# examples of lists
list1 = [1, 2, 3, 4, 5, 6]
list2 = [4, 5, 6, 7, 8]

# prints the missing and additional elements in list2
print("Missing values in second list:", (set(list1).difference(list2)))
print("Additional values in second list:", (set(list2).difference(list1)))

# prints the missing and additional elements in list1
print("Missing values in first list:", (set(list2).difference(list1)))
print("Additional values in first list:", (set(list1).difference(list2)))

Output:

Missing values in second list: {1, 2, 3}
Additional values in second list: {7, 8}
Missing values in first list: {7, 8}
Additional values in first list: {1, 2, 3}

Largest Element in Array | Practice | GeeksforGeeks

Given an array A[] of size n. The task is to find the largest element in it.

Example 1:
Input:
n = 5
A[] = {1, 8, 7, 56, 90}
Output:
90
Explanation:
The largest element of the given array is 90.

Example 2:
Input:
n = 7
A[] = {1, 2, 0, 3, 2, 4, 5}
Output:
5
Explanation:
The largest element of the given array is 5.

Your Task:
You don't need to read input or print anything. Your task is to complete the function largest() which takes the array A[] and its size n as inputs and returns the maximum element in the array.

Expected Time Complexity: O(N)
Expected Auxiliary Space: O(1)

Constraints:
1 <= n <= 10^3
0 <= A[i] <= 10^3
Array may contain duplicate elements.
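One possible Python implementation of the largest() function described above, a single left-to-right scan that keeps the biggest value seen so far (the function name and signature follow the problem statement; this is a sketch, not the official editorial solution):

```python
def largest(arr, n):
    # Single pass over the array: O(n) time, O(1) auxiliary space,
    # as the problem's expected complexities require.
    best = arr[0]
    for i in range(1, n):
        if arr[i] > best:
            best = arr[i]
    return best

print(largest([1, 8, 7, 56, 90], 5))      # 90
print(largest([1, 2, 0, 3, 2, 4, 5], 7))  # 5
```

Duplicates need no special handling: a repeated maximum compares equal to the running best and simply leaves it unchanged.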
mdparvejalam687 · 11 hours ago

Java solution:

int larger = arr[0];
for(int i = 0; i < n; i++) {
    if(arr[i] > larger) {
        larger = arr[i];
    }
}
return larger;

A C solution that reads the array and then scans it for the maximum:

#include <stdio.h>
int main()
{
    int a[1000], i, n, m;
    printf("enter size of the array:");
    scanf("%d", &n);
    printf("enter elements in an array:");
    for(i = 0; i < n; i++)
        scanf("%d", &a[i]);
    m = a[0];
    for(i = 1; i < n; i++)
        if(a[i] > m)
            m = a[i];
    printf("largest element: %d\n", m);
    return 0;
}

vikassinghwow · 1 week ago

class Compute {
    public int largest(int arr[], int n) {
        int largest = arr[0];
        for(int i = 1; i < n; i++) {
            if(arr[i] > largest) {
                largest = arr[i];
            }
        }
        return largest;
    }
}

jeetsorathiya · 2 weeks ago

My solution in Java: just use the sort function.

Arrays.sort(arr);
return arr[n-1];

akash242018 · 2 weeks ago

# python3 code
def largest(arr, n):
    maxium = arr[0]
    for i in range(n):
        if arr[i] > maxium:
            maxium = arr[i]
    return maxium

Cross Validation — Why & How. Importance Of Cross Validation In...
| by Amitrajit Bose | Towards Data Science

So, you have been working on an imbalanced data set for a few days now, trying out different machine learning models, training them on a part of your data set and testing their accuracy, and you are ecstatic to see the score going above 0.95 every time. Do you really think you have achieved 95% accuracy with your model?

I'm assuming that you have performed top-notch pre-processing on your data set, and that you have removed any missing values, categorical values and noise. Whatever state-of-the-art algorithm you have used to build your hypothesis function and train the machine learning model, you have to evaluate its performance before moving forward with it.

Now the easiest and fastest method to evaluate a model is to split the data set into a training set and a testing set, train the model on the training set, and check its accuracy on the testing set. And do not forget to shuffle the data set before performing the split. But this method is no assurance at all; in simple words, you cannot rely on this approach while finalizing a model. You might be wondering — Why?

Let us consider that you are working on a spam emails data set, which contains 98% spam emails and 2% non-spam valid emails. In this case, even if you do not create any model but just classify every input as spam, you will be getting 0.98 accuracy. This condition is called the accuracy paradox.

Imagine what would happen if this was a model for classification of tumour cells or chest X-rays and you had pushed a 98% accurate model to the market. Maybe this would have killed hundreds of patients, you never know.

Do not worry and grab a cup of something warm. In the article below, I will be explaining the entire process of evaluation of your machine learning model.
All you need to know is some basic Python syntax as a prerequisite.

We initially split the entire data we have into two sets: one is used to train the model, and the other is kept as a holdout set used to check how the model behaves on completely unseen data. The figure below summarises the entire idea of performing the split.

Please note that the train-test ratio can be anything, like 80:20, 75:25, 90:10, etc. This is a decision the machine learning engineer has to take based on the amount of data. A good rule of thumb is to use 25% of the data set for testing.

You can do this with ease using some Python and the open-source Scikit-learn API.

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 42, shuffle = True, stratify = y)

X is the original entire set of features and y is the entire set of corresponding true labels. The above function splits the entire set into train and test sets with a ratio of 0.3 assigned to the test set. The parameter shuffle is set to True, so the data set will be randomly shuffled before the split. The parameter stratify was added to Scikit-learn in v0.17; it is essential when dealing with imbalanced data sets, such as the spam classification example. It makes a split so that the proportion of values in the sample produced will be the same as the proportion of values provided to the parameter stratify. For example, if the variable y is a binary categorical variable with values 0 and 1 and there are 10% of zeros and 90% of ones, stratify=y will make sure that your random split has 10% of 0's and 90% of 1's.

As we were discussing, only checking how many examples from the test set were classified correctly is not a useful metric for checking model performance because of factors such as class imbalance. We need a more robust and nuanced metric.

Say hello to the confusion matrix.
An easy and popular method of diagnosing model performance. Let us understand this with our scenario of spam email classification. The confusion matrix would look like this.

There are several metrics that can be deduced from the confusion matrix, such as —

Accuracy  = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall    = TP / (TP + FN)
F1 Score  = (2 x Precision x Recall) / (Precision + Recall)

— where TP is True Positive, FN is False Negative, and likewise for the rest.

Precision is basically all the things that you said were relevant, whereas Recall is all the things that are actually relevant. In other words, recall is also referred to as the sensitivity of your model, whereas precision is referred to as the Positive Predicted Value. Here's a one-pager cheat sheet to summarise it all.

Now that you have grasped the concept, let's understand how to do it with ease using the Scikit-learn API and a few lines of Python.

from sklearn.metrics import confusion_matrix, classification_report

y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))

Assuming that you have prepared the model using the .fit() method on the training set (on which I shall write probably some other day), you then calculate the predicted set of labels using the .predict() method of the model. You have the original labels for these in y_test, and you pass the two arrays into the above two functions.
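The metric formulas above are easy to verify by hand. The short pure-Python sketch below derives all four metrics from hypothetical confusion-matrix counts (the counts 40/50/5/5 are made up purely for illustration):

```python
# Hypothetical confusion-matrix counts for a binary classifier
TP, TN, FP, FN = 40, 50, 5, 5

# Apply the four formulas directly to the counts
accuracy  = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall    = TP / (TP + FN)
f1_score  = (2 * precision * recall) / (precision + recall)

print(accuracy)            # 0.9
print(round(precision, 3)) # 0.889
print(round(recall, 3))    # 0.889
print(round(f1_score, 3))  # 0.889
```

Note how accuracy (0.9) can look flattering on its own; precision and recall expose how the errors are distributed between false positives and false negatives.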
What you’ll get is a two-by-two confusion matrix (because spam classification is binary classification) and a classification report which returns all the above-discussed metrics.\nNote: The true values are passed as the first parameter and the predicted values as the second parameter.\nCross-validation is a technique for assessing how a statistical analysis generalises to an independent data set. It evaluates machine learning models by training several models on subsets of the available input data and evaluating them on the complementary subsets. Using cross-validation, we stand a much better chance of detecting over-fitting.\nThere are several cross-validation techniques, such as:\n1. K-Fold Cross Validation\n2. Leave P-out Cross Validation\n3. Leave One-out Cross Validation\n4. Repeated Random Sub-sampling Method\n5. Holdout Method\nIn this post, we will discuss the most popular of them, i.e., K-Fold Cross Validation. The others are also very effective, but less commonly used.\nSo let’s take a minute to ask ourselves why we need cross-validation — we have been splitting the data set into a training set and a testing set (or holdout set), but the accuracy and metrics are highly biased by how the split was performed: whether the data set was shuffled, which part was taken for training and testing, how much, and so on. Moreover, a single split is not representative of the model’s ability to generalize. This leads us to cross-validation.\nFirst I would like to introduce you to a golden rule — “Never mix training and test data”. Your first step should always be to isolate the test data set and use it only for final evaluation. Cross-validation is thus performed on the training set.\nInitially, the entire training data set is broken up into k equal parts. The first part is kept as the holdout (testing) set and the remaining k-1 parts are used to train the model. The trained model is then tested on the holdout set.
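The partitioning just described, k equal parts with one part held out, can be sketched with plain index arithmetic (a toy illustration, not scikit-learn's KFold):

```python
def kfold_indices(n, k):
    """Partition range(n) into k folds and yield (train, test) index lists."""
    fold_size, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        end = start + fold_size + (1 if i < extra else 0)
        folds.append(list(range(start, end)))
        start = end
    for i in range(k):  # rotate which fold is held out
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

# Each data point lands in the held-out fold exactly once across the k rounds.
seen = []
for train, test in kfold_indices(10, 5):
    seen.extend(test)
print(sorted(seen) == list(range(10)))  # True
```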
The above process is repeated k times, changing the holdout set each time. Thus, every data point gets an equal opportunity to be included in the test set.\nUsually, k is equal to 3 or 5. It can be extended to higher values like 10 or 15, but that becomes extremely computationally expensive and time-consuming. Let us have a look at how we can implement this with a few lines of Python code and the scikit-learn API.\nfrom sklearn.model_selection import cross_val_score\nprint(cross_val_score(model, X_train, y_train, cv=5))\nWe pass the model or classifier object, the features, the labels and the parameter cv, which indicates the k for K-Fold cross-validation. The method returns a list of k accuracy values, one for each iteration. In general, we take their average and use it as a consolidated cross-validation score.\nimport numpy as np\nprint(np.mean(cross_val_score(model, X_train, y_train, cv=5)))\nAlthough it might be computationally expensive, cross-validation is essential for evaluating the performance of the learning model.\nFeel free to have a look at the other cross-validation techniques, which I have included in the references section at the end of this article.\nThe accuracy requirement of a machine learning model varies across industry, domain, requirement and problem statement. But a final model should never be confirmed without evaluating all the essential metrics.\nBy the way, once you are done with evaluation and have finally confirmed your machine learning model, you should reuse the test data that was initially isolated for testing purposes and retrain your model on the complete data you have, so as to increase the chances of better predictions.\nThanks for reading. This was a high-level overview of the topic; I tried my best to explain the concepts at hand in an easy way. Please feel free to comment, criticize and suggest improvements to the article.
Also, claps highly encourage me to write more! Stay tuned for more articles.\nHave a look at this friendly introduction to neural networks with PyTorch.\n[1] Leave P-out Cross Validation\n[2] Leave One-out Cross Validation\n[3] Repeated Random Sub-sampling Method\n[4] Holdout Method\n[5] Cross Validation\n[6] Stanford’s MOOC — Statistical Learning With R — Course notes"},"parsed":{"kind":"list like","value":[],"string":"[]"}}},{"rowIdx":587,"cells":{"title":{"kind":"string","value":"Area of Largest rectangle that can be inscribed in an Ellipse?"},"text":{"kind":"string","value":"Here we will see the area of the largest rectangle that can be inscribed in an ellipse.
The rectangle inscribed in the ellipse will look like the figure below −\nHere a and b are half of the major and minor axes of the ellipse. The upper right corner of the rectangle is at (x, y), so the area is A = 4xy, where (x^2/a^2) + (y^2/b^2) = 1.\nNow, expressing the area as a function f(x) and maximizing it, we get the maximum area as 2ab.\n#include <iostream>\n#include <cmath>\nusing namespace std;\nfloat area(float a, float b) {\n if (a < 0 || b < 0) // if the values are negative, the input is invalid\n return -1;\n float area = 2 * a * b; // maximum area of a rectangle inscribed in the ellipse\n return area;\n}\nint main() {\n float a = 10, b = 8;\n cout << "Area : " << area(a, b);\n}\nArea : 160"},"parsed":{"kind":"list like","value":[],"string":"[]"}}},{"rowIdx":588,"cells":{"title":{"kind":"string","value":"Explain switch statement in C language"},"text":{"kind":"string","value":"It is used to select one among multiple decisions.
‘switch’ successively tests a value against a list of integer (or) character constants. When a match is found, the statement (or statements) associated with that value is executed.\nThe syntax is given below −\nswitch (expression){\n case value1 : stmt1;\n break;\n case value2 : stmt2;\n break;\n - - - - - -\n default : stmt - x;\n}\nRefer to the algorithm given below −\nStep 1: Declare variables.\nStep 2: Read the expression variable.\nStep 3: Switch(expression)\n If value 1 is selected : stmt 1 executes; break (exits from switch)\n If value 2 is selected : stmt 2 executes; break\n If value 3 is selected : stmt 3 executes; break\n ...................................................\nDefault : stmt-x executes;\nThe following C program demonstrates the usage of the switch statement −\n#include<stdio.h>\nint main(){\n int n;\n printf("enter a number");\n scanf("%d", &n);\n switch (n){\n case 0 : printf("zero");\n break;\n case 1 : printf("one");\n break;\n default : printf("wrong choice");\n }\n return 0;\n}\nYou will see the following output −\nenter a number\n1\none\nConsider another program on switch case as mentioned below −\n#include<stdio.h>\nint main(){\n char grade;\n printf("Enter the grade of a student:\n");\n scanf("%c",&grade);\n switch(grade){\n case 'A': printf("Distinction\n");\n break;\n case 'B': printf("First class\n");\n break;\n case 'C': printf("Second class\n");\n break;\n case 'D': printf("Third class\n");\n break;\n default : printf("Fail");\n }\n printf("Student grade=%c",grade);\n return 0;\n}\nYou will see the following output −\nRun 1: Enter the grade of a student:A\nDistinction\nStudent grade=A\nRun 2: Enter the grade of a student:C\nSecond class\nStudent grade=C"},"parsed":{"kind":"list like","value":[],"string":"[]"}}},{"rowIdx":589,"cells":{"title":{"kind":"string","value":"TimeSpan.Compare() Method in C#"},"text":{"kind":"string","value":"The TimeSpan.Compare() method in C# is used to compare two TimeSpan values and returns an integer that indicates whether the first value is shorter than, equal to, or longer than the second value.\nThe return value is -1 if span1 is shorter than span2, 0 if span1 equals span2, and 1 if span1 is longer than span2.\nThe syntax is as follows −\npublic static int Compare (TimeSpan span1, TimeSpan span2);\nAbove, the parameter span1 is the 1st time interval to compare, whereas span2 is the 2nd interval to compare.\nLet us now see an example −\nusing System;\npublic class Demo {\n public static void Main(){\n TimeSpan span1 = new TimeSpan(-6, 25, 0);
TimeSpan span2 = new TimeSpan(1, 11, 25, 20);\n TimeSpan span3 = TimeSpan.MinValue;\n TimeSpan res1 = span1.Add(span2);\n TimeSpan res2 = span2.Add(span3);\n Console.WriteLine(\"Final Timespan (TimeSpan1 + TimeSpan2) = \"+res1);\n Console.WriteLine(\"Final Timespan (TimeSpan2 + TimeSpan3) = \"+res2);\n Console.WriteLine(\"Result (Comparison of span1 and span2) = \"+TimeSpan.Compare(span1, span2));\n }\n}\nThis will produce the following output −\nFinal Timespan (TimeSpan1 + TimeSpan2) = 1.05:50:20\nFinal Timespan (TimeSpan2 + TimeSpan3) = -10675197.15:22:45.4775808\nResult (Comparison of span1 and span2) = -1\nLet us now see another example −\n Live Demo\nusing System;\npublic class Demo {\n public static void Main(){\n TimeSpan span1 = new TimeSpan(-6, 25, 0);\n TimeSpan span2 = new TimeSpan(1, 10, 0);\n Console.WriteLine(\"Result (Comparison of span1 and span2) = \"+TimeSpan.Compare(span1, span2));\n }\n}\nThis will produce the following output −\nResult (Comparison of span1 and span2) = -1"},"parsed":{"kind":"list like","value":[{"code":null,"e":1259,"s":1062,"text":"The TimeSpan.Compare() method in C# is used to compare two TimeSpan values\nand returns an integer that indicates whether the first value is shorter than, equal to, or longer than the second value."},{"code":null,"e":1373,"s":1259,"text":"The return value is -1 if span1 is shorter than span2, 0 if span1=span2, whereas 1 if span1 is longer than span2."},{"code":null,"e":1400,"s":1373,"text":"The syntax is as follows −"},{"code":null,"e":1460,"s":1400,"text":"public static int Compare (TimeSpan span1, TimeSpan span2);"},{"code":null,"e":1570,"s":1460,"text":"Above, the parameter span1 is the 1st time interval to compare, whereas span2 is the 2nd interval to compare."},{"code":null,"e":1598,"s":1570,"text":"Let us now see an example −"},{"code":null,"e":1609,"s":1598,"text":" Live Demo"},{"code":null,"e":2153,"s":1609,"text":"using System;\npublic class Demo {\n public static void Main(){\n TimeSpan 
span1 = new TimeSpan(-6, 25, 0);\n TimeSpan span2 = new TimeSpan(1, 11, 25, 20);\n TimeSpan span3 = TimeSpan.MinValue;\n TimeSpan res1 = span1.Add(span2);\n TimeSpan res2 = span2.Add(span3);\n Console.WriteLine(\"Final Timespan (TimeSpan1 + TimeSpan2) = \"+res1);\n Console.WriteLine(\"Final Timespan (TimeSpan2 + TimeSpan3) = \"+res2);\n Console.WriteLine(\"Result (Comparison of span1 and span2) = \"+TimeSpan.Compare(span1, span2));\n }\n}"},{"code":null,"e":2194,"s":2153,"text":"This will produce the following output −"},{"code":null,"e":2358,"s":2194,"text":"Final Timespan (TimeSpan1 + TimeSpan2) = 1.05:50:20\nFinal Timespan (TimeSpan2 + TimeSpan3) = -10675197.15:22:45.4775808\nResult (Comparison of span1 and span2) = -1"},{"code":null,"e":2391,"s":2358,"text":"Let us now see another example −"},{"code":null,"e":2402,"s":2391,"text":" Live Demo"},{"code":null,"e":2669,"s":2402,"text":"using System;\npublic class Demo {\n public static void Main(){\n TimeSpan span1 = new TimeSpan(-6, 25, 0);\n TimeSpan span2 = new TimeSpan(1, 10, 0);\n Console.WriteLine(\"Result (Comparison of span1 and span2) = \"+TimeSpan.Compare(span1, span2));\n }\n}"},{"code":null,"e":2710,"s":2669,"text":"This will produce the following output −"},{"code":null,"e":2754,"s":2710,"text":"Result (Comparison of span1 and span2) = -1"}],"string":"[\n {\n \"code\": null,\n \"e\": 1259,\n \"s\": 1062,\n \"text\": \"The TimeSpan.Compare() method in C# is used to compare two TimeSpan values\\nand returns an integer that indicates whether the first value is shorter than, equal to, or longer than the second value.\"\n },\n {\n \"code\": null,\n \"e\": 1373,\n \"s\": 1259,\n \"text\": \"The return value is -1 if span1 is shorter than span2, 0 if span1=span2, whereas 1 if span1 is longer than span2.\"\n },\n {\n \"code\": null,\n \"e\": 1400,\n \"s\": 1373,\n \"text\": \"The syntax is as follows −\"\n },\n {\n \"code\": null,\n \"e\": 1460,\n \"s\": 1400,\n \"text\": \"public static int Compare (TimeSpan 
span1, TimeSpan span2);\"\n },\n {\n \"code\": null,\n \"e\": 1570,\n \"s\": 1460,\n \"text\": \"Above, the parameter span1 is the 1st time interval to compare, whereas span2 is the 2nd interval to compare.\"\n },\n {\n \"code\": null,\n \"e\": 1598,\n \"s\": 1570,\n \"text\": \"Let us now see an example −\"\n },\n {\n \"code\": null,\n \"e\": 1609,\n \"s\": 1598,\n \"text\": \" Live Demo\"\n },\n {\n \"code\": null,\n \"e\": 2153,\n \"s\": 1609,\n \"text\": \"using System;\\npublic class Demo {\\n public static void Main(){\\n TimeSpan span1 = new TimeSpan(-6, 25, 0);\\n TimeSpan span2 = new TimeSpan(1, 11, 25, 20);\\n TimeSpan span3 = TimeSpan.MinValue;\\n TimeSpan res1 = span1.Add(span2);\\n TimeSpan res2 = span2.Add(span3);\\n Console.WriteLine(\\\"Final Timespan (TimeSpan1 + TimeSpan2) = \\\"+res1);\\n Console.WriteLine(\\\"Final Timespan (TimeSpan2 + TimeSpan3) = \\\"+res2);\\n Console.WriteLine(\\\"Result (Comparison of span1 and span2) = \\\"+TimeSpan.Compare(span1, span2));\\n }\\n}\"\n },\n {\n \"code\": null,\n \"e\": 2194,\n \"s\": 2153,\n \"text\": \"This will produce the following output −\"\n },\n {\n \"code\": null,\n \"e\": 2358,\n \"s\": 2194,\n \"text\": \"Final Timespan (TimeSpan1 + TimeSpan2) = 1.05:50:20\\nFinal Timespan (TimeSpan2 + TimeSpan3) = -10675197.15:22:45.4775808\\nResult (Comparison of span1 and span2) = -1\"\n },\n {\n \"code\": null,\n \"e\": 2391,\n \"s\": 2358,\n \"text\": \"Let us now see another example −\"\n },\n {\n \"code\": null,\n \"e\": 2402,\n \"s\": 2391,\n \"text\": \" Live Demo\"\n },\n {\n \"code\": null,\n \"e\": 2669,\n \"s\": 2402,\n \"text\": \"using System;\\npublic class Demo {\\n public static void Main(){\\n TimeSpan span1 = new TimeSpan(-6, 25, 0);\\n TimeSpan span2 = new TimeSpan(1, 10, 0);\\n Console.WriteLine(\\\"Result (Comparison of span1 and span2) = \\\"+TimeSpan.Compare(span1, span2));\\n }\\n}\"\n },\n {\n \"code\": null,\n \"e\": 2710,\n \"s\": 2669,\n \"text\": \"This will produce the following 
output −\"\n },\n {\n \"code\": null,\n \"e\": 2754,\n \"s\": 2710,\n \"text\": \"Result (Comparison of span1 and span2) = -1\"\n }\n]"}}},{"rowIdx":590,"cells":{"title":{"kind":"string","value":"DateFormat format() Method in Java with Examples - GeeksforGeeks"},"text":{"kind":"string","value":"13 Jan, 2022\nDateFormat class present inside java.text package is an abstract class that is used to format and parse dates for any locale. It allows us to format date to text and parse text to date. DateFormat class provides many functionalities to obtain, format, parse default date/time. DateFormat class extends Format class that means it is a subclass of Format class. Since DateFormat class is an abstract class, therefore, it can be used for date/time formatting subclasses, which format and parses dates or times in a language-independent manner. \nThe format() method of DateFormat class in Java is used to format a given date into a Date/Time string. Basically, the method is used to convert this date and time into a particular format i.e., mm/dd/yyyy.\nSyntax: \npublic final String format(Date date)\nParameters: The method takes one parameter date of the Date object type and refers to the date whose string output is to be produced.\nReturn Type: Returns Date or time in string format of mm/dd/yyyy.\nExample 1:\nJava\n// Java Program to Illustrate format() Method// of DateTime Class // Importing required classesimport java.text.*;import java.util.Calendar; // Main class// DateFormat_Demopublic class GFG { // Main driver method public static void main(String[] args) { // Initializing the first formatter DateFormat DFormat = DateFormat.getDateInstance(); // Initializing the calender Object Calendar cal = Calendar.getInstance(); // Displaying the actual date System.out.println(\"The original Date: \" + cal.getTime()); // Converting date using format() method String curr_date = DFormat.format(cal.getTime()); // Printing the formatted date System.out.println(\"Formatted Date: \" 
+ curr_date); }}\nThe original Date: Wed Mar 27 11:12:29 UTC 2019\nFormatted Date: Mar 27, 2019\n \nExample 2:\nJava\n// Java Program to Illustrate format() Method// of DateTime Class // Importing required classesimport java.text.*;import java.util.*; // Main class// DateFormat_Demopublic class GFG { // Main driver method public static void main(String[] args) { // Initializing the first formatter DateFormat DFormat = DateFormat.getDateTimeInstance( DateFormat.LONG, DateFormat.LONG, Locale.getDefault()); // Initializing the calender Object Calendar cal = Calendar.getInstance(); // Displaying the actual date System.out.println(\"The original Date: \" + cal.getTime()); // Converting date using format() method and // storing date in a string String curr_date = DFormat.format(cal.getTime()); // Printing the formatted date on console System.out.println(\"Formatted Date: \" + curr_date); }}\nThe original Date: Tue Jan 11 05:42:29 UTC 2022\nFormatted Date: January 11, 2022 at 5:42:29 AM UTC\nsolankimayank\nJava - util package\nJava-DateFormat\nJava-Functions\nJava\nJava\nWriting code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here.\nComments\nOld Comments\nHashMap in Java with Examples\nInitialize an ArrayList in Java\nObject Oriented Programming (OOPs) Concept in Java\nInterfaces in Java\nArrayList in Java\nHow to iterate any Map in Java\nMultidimensional Arrays in Java\nSingleton Class in Java\nStack Class in Java\nSet in Java"},"parsed":{"kind":"list like","value":[{"code":null,"e":24508,"s":24480,"text":"\n13 Jan, 2022"},{"code":null,"e":25050,"s":24508,"text":"DateFormat class present inside java.text package is an abstract class that is used to format and parse dates for any locale. It allows us to format date to text and parse text to date. DateFormat class provides many functionalities to obtain, format, parse default date/time. DateFormat class extends Format class that means it is a subclass of Format class. 
Since DateFormat class is an abstract class, therefore, it can be used for date/time formatting subclasses, which format and parses dates or times in a language-independent manner. "},{"code":null,"e":25257,"s":25050,"text":"The format() method of DateFormat class in Java is used to format a given date into a Date/Time string. Basically, the method is used to convert this date and time into a particular format i.e., mm/dd/yyyy."},{"code":null,"e":25266,"s":25257,"text":"Syntax: "},{"code":null,"e":25304,"s":25266,"text":"public final String format(Date date)"},{"code":null,"e":25438,"s":25304,"text":"Parameters: The method takes one parameter date of the Date object type and refers to the date whose string output is to be produced."},{"code":null,"e":25504,"s":25438,"text":"Return Type: Returns Date or time in string format of mm/dd/yyyy."},{"code":null,"e":25515,"s":25504,"text":"Example 1:"},{"code":null,"e":25520,"s":25515,"text":"Java"},{"code":"// Java Program to Illustrate format() Method// of DateTime Class // Importing required classesimport java.text.*;import java.util.Calendar; // Main class// DateFormat_Demopublic class GFG { // Main driver method public static void main(String[] args) { // Initializing the first formatter DateFormat DFormat = DateFormat.getDateInstance(); // Initializing the calender Object Calendar cal = Calendar.getInstance(); // Displaying the actual date System.out.println(\"The original Date: \" + cal.getTime()); // Converting date using format() method String curr_date = DFormat.format(cal.getTime()); // Printing the formatted date System.out.println(\"Formatted Date: \" + curr_date); }}","e":26316,"s":25520,"text":null},{"code":null,"e":26393,"s":26316,"text":"The original Date: Wed Mar 27 11:12:29 UTC 2019\nFormatted Date: Mar 27, 2019"},{"code":null,"e":26406,"s":26395,"text":"Example 2:"},{"code":null,"e":26411,"s":26406,"text":"Java"},{"code":"// Java Program to Illustrate format() Method// of DateTime Class // Importing 
required classesimport java.text.*;import java.util.*; // Main class// DateFormat_Demopublic class GFG { // Main driver method public static void main(String[] args) { // Initializing the first formatter DateFormat DFormat = DateFormat.getDateTimeInstance( DateFormat.LONG, DateFormat.LONG, Locale.getDefault()); // Initializing the calender Object Calendar cal = Calendar.getInstance(); // Displaying the actual date System.out.println(\"The original Date: \" + cal.getTime()); // Converting date using format() method and // storing date in a string String curr_date = DFormat.format(cal.getTime()); // Printing the formatted date on console System.out.println(\"Formatted Date: \" + curr_date); }}","e":27329,"s":26411,"text":null},{"code":null,"e":27428,"s":27329,"text":"The original Date: Tue Jan 11 05:42:29 UTC 2022\nFormatted Date: January 11, 2022 at 5:42:29 AM UTC"},{"code":null,"e":27442,"s":27428,"text":"solankimayank"},{"code":null,"e":27462,"s":27442,"text":"Java - util package"},{"code":null,"e":27478,"s":27462,"text":"Java-DateFormat"},{"code":null,"e":27493,"s":27478,"text":"Java-Functions"},{"code":null,"e":27498,"s":27493,"text":"Java"},{"code":null,"e":27503,"s":27498,"text":"Java"},{"code":null,"e":27601,"s":27503,"text":"Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."},{"code":null,"e":27610,"s":27601,"text":"Comments"},{"code":null,"e":27623,"s":27610,"text":"Old Comments"},{"code":null,"e":27653,"s":27623,"text":"HashMap in Java with Examples"},{"code":null,"e":27685,"s":27653,"text":"Initialize an ArrayList in Java"},{"code":null,"e":27736,"s":27685,"text":"Object Oriented Programming (OOPs) Concept in Java"},{"code":null,"e":27755,"s":27736,"text":"Interfaces in Java"},{"code":null,"e":27773,"s":27755,"text":"ArrayList in Java"},{"code":null,"e":27804,"s":27773,"text":"How to iterate any Map in Java"},{"code":null,"e":27836,"s":27804,"text":"Multidimensional Arrays in 
Java"},{"code":null,"e":27860,"s":27836,"text":"Singleton Class in Java"},{"code":null,"e":27880,"s":27860,"text":"Stack Class in Java"}],"string":"[\n {\n \"code\": null,\n \"e\": 24508,\n \"s\": 24480,\n \"text\": \"\\n13 Jan, 2022\"\n },\n {\n \"code\": null,\n \"e\": 25050,\n \"s\": 24508,\n \"text\": \"DateFormat class present inside java.text package is an abstract class that is used to format and parse dates for any locale. It allows us to format date to text and parse text to date. DateFormat class provides many functionalities to obtain, format, parse default date/time. DateFormat class extends Format class that means it is a subclass of Format class. Since DateFormat class is an abstract class, therefore, it can be used for date/time formatting subclasses, which format and parses dates or times in a language-independent manner. \"\n },\n {\n \"code\": null,\n \"e\": 25257,\n \"s\": 25050,\n \"text\": \"The format() method of DateFormat class in Java is used to format a given date into a Date/Time string. 
Basically, the method is used to convert this date and time into a particular format i.e., mm/dd/yyyy.\"\n },\n {\n \"code\": null,\n \"e\": 25266,\n \"s\": 25257,\n \"text\": \"Syntax: \"\n },\n {\n \"code\": null,\n \"e\": 25304,\n \"s\": 25266,\n \"text\": \"public final String format(Date date)\"\n },\n {\n \"code\": null,\n \"e\": 25438,\n \"s\": 25304,\n \"text\": \"Parameters: The method takes one parameter date of the Date object type and refers to the date whose string output is to be produced.\"\n },\n {\n \"code\": null,\n \"e\": 25504,\n \"s\": 25438,\n \"text\": \"Return Type: Returns Date or time in string format of mm/dd/yyyy.\"\n },\n {\n \"code\": null,\n \"e\": 25515,\n \"s\": 25504,\n \"text\": \"Example 1:\"\n },\n {\n \"code\": null,\n \"e\": 25520,\n \"s\": 25515,\n \"text\": \"Java\"\n },\n {\n \"code\": \"// Java Program to Illustrate format() Method// of DateTime Class // Importing required classesimport java.text.*;import java.util.Calendar; // Main class// DateFormat_Demopublic class GFG { // Main driver method public static void main(String[] args) { // Initializing the first formatter DateFormat DFormat = DateFormat.getDateInstance(); // Initializing the calender Object Calendar cal = Calendar.getInstance(); // Displaying the actual date System.out.println(\\\"The original Date: \\\" + cal.getTime()); // Converting date using format() method String curr_date = DFormat.format(cal.getTime()); // Printing the formatted date System.out.println(\\\"Formatted Date: \\\" + curr_date); }}\",\n \"e\": 26316,\n \"s\": 25520,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 26393,\n \"s\": 26316,\n \"text\": \"The original Date: Wed Mar 27 11:12:29 UTC 2019\\nFormatted Date: Mar 27, 2019\"\n },\n {\n \"code\": null,\n \"e\": 26406,\n \"s\": 26395,\n \"text\": \"Example 2:\"\n },\n {\n \"code\": null,\n \"e\": 26411,\n \"s\": 26406,\n \"text\": \"Java\"\n },\n {\n \"code\": \"// Java Program to Illustrate format() Method// of DateTime Class // 
Importing required classesimport java.text.*;import java.util.*; // Main class// DateFormat_Demopublic class GFG { // Main driver method public static void main(String[] args) { // Initializing the first formatter DateFormat DFormat = DateFormat.getDateTimeInstance( DateFormat.LONG, DateFormat.LONG, Locale.getDefault()); // Initializing the calender Object Calendar cal = Calendar.getInstance(); // Displaying the actual date System.out.println(\\\"The original Date: \\\" + cal.getTime()); // Converting date using format() method and // storing date in a string String curr_date = DFormat.format(cal.getTime()); // Printing the formatted date on console System.out.println(\\\"Formatted Date: \\\" + curr_date); }}\",\n \"e\": 27329,\n \"s\": 26411,\n \"text\": null\n },\n {\n \"code\": null,\n \"e\": 27428,\n \"s\": 27329,\n \"text\": \"The original Date: Tue Jan 11 05:42:29 UTC 2022\\nFormatted Date: January 11, 2022 at 5:42:29 AM UTC\"\n },\n {\n \"code\": null,\n \"e\": 27442,\n \"s\": 27428,\n \"text\": \"solankimayank\"\n },\n {\n \"code\": null,\n \"e\": 27462,\n \"s\": 27442,\n \"text\": \"Java - util package\"\n },\n {\n \"code\": null,\n \"e\": 27478,\n \"s\": 27462,\n \"text\": \"Java-DateFormat\"\n },\n {\n \"code\": null,\n \"e\": 27493,\n \"s\": 27478,\n \"text\": \"Java-Functions\"\n },\n {\n \"code\": null,\n \"e\": 27498,\n \"s\": 27493,\n \"text\": \"Java\"\n },\n {\n \"code\": null,\n \"e\": 27503,\n \"s\": 27498,\n \"text\": \"Java\"\n },\n {\n \"code\": null,\n \"e\": 27601,\n \"s\": 27503,\n \"text\": \"Writing code in comment?\\nPlease use ide.geeksforgeeks.org,\\ngenerate link and share the link here.\"\n },\n {\n \"code\": null,\n \"e\": 27610,\n \"s\": 27601,\n \"text\": \"Comments\"\n },\n {\n \"code\": null,\n \"e\": 27623,\n \"s\": 27610,\n \"text\": \"Old Comments\"\n },\n {\n \"code\": null,\n \"e\": 27653,\n \"s\": 27623,\n \"text\": \"HashMap in Java with Examples\"\n },\n {\n \"code\": null,\n \"e\": 27685,\n \"s\": 27653,\n \"text\": 
\"Initialize an ArrayList in Java\"\n },\n {\n \"code\": null,\n \"e\": 27736,\n \"s\": 27685,\n \"text\": \"Object Oriented Programming (OOPs) Concept in Java\"\n },\n {\n \"code\": null,\n \"e\": 27755,\n \"s\": 27736,\n \"text\": \"Interfaces in Java\"\n },\n {\n \"code\": null,\n \"e\": 27773,\n \"s\": 27755,\n \"text\": \"ArrayList in Java\"\n },\n {\n \"code\": null,\n \"e\": 27804,\n \"s\": 27773,\n \"text\": \"How to iterate any Map in Java\"\n },\n {\n \"code\": null,\n \"e\": 27836,\n \"s\": 27804,\n \"text\": \"Multidimensional Arrays in Java\"\n },\n {\n \"code\": null,\n \"e\": 27860,\n \"s\": 27836,\n \"text\": \"Singleton Class in Java\"\n },\n {\n \"code\": null,\n \"e\": 27880,\n \"s\": 27860,\n \"text\": \"Stack Class in Java\"\n }\n]"}}},{"rowIdx":591,"cells":{"title":{"kind":"string","value":"Focal Loss & Class Imbalance Data: TensorFlow | Towards Data Science"},"text":{"kind":"string","value":"In machine learning sometimes we are dealt with a very good hand like MNIST fashion data or CIFAR-10 data where the examples of each class in the data-set are well balanced. What happens if in a classification problem the distribution of examples across the known classes are biased or skewed ? Such problems with severe to slight bias in the data-set are common and today we will discuss an approach to handle such class imbalanced data. Let’s consider an extreme case of imbalanced data-set of mails and we build a classifier to detect spam mails. Since spam mails are relatively rarer, let’s consider 5% of all mails are spams. If we just a write a simple one line code as —\ndef detectspam(mail-data): return ‘not spam’ \nThis will give us right answer 95% of time and even though this is an extreme hyperbole but you get the problem. 
Most importantly, training any model with this data will lead to high-confidence predictions for the general mails, and due to the extremely low number of spam mails in the training data, the model will likely not learn to predict the spam mails correctly. This is why precision, recall, F1 score and ROC/AUC curves are the important metrics that truly tell us the story. As you have already guessed, one way to reduce this issue is to resample the data-set so that the classes are balanced. There are several other ways to address the class imbalance problem in machine learning, and an excellent comprehensive review has been put together by Jason Brownlee; check it here.
In computer vision problems this class imbalance can be even more critical, and here we discuss how the authors approached object detection tasks, which led to the development of focal loss. In Fast R-CNN-type algorithms, we first run an image through a ConvNet to obtain a feature map, and then region proposal is performed (generally around 2K regions) on the high-resolution feature map. These are 2-stage detectors, and when the Focal Loss paper was introduced the intriguing question was whether a one-stage detector like YOLO or SSD could reach the same accuracy as the 2-stage detectors. One-stage detectors were fast, but at that time their accuracy was within 10–40% of the 2-stage detectors. The authors identified class imbalance during training as the main obstacle that prevents one-stage detectors from reaching the same accuracy as 2-stage detectors.
An example of such class imbalance is shown in the self-explanatory Figure 1, which is taken from the original authors’ own presentation. They found that one-stage detectors perform better when a higher number of bounding boxes covers the space of possible objects. But this approach causes a major problem, as the foreground and background data are not equally distributed. 
For example, if we consider 20,000 bounding boxes, mostly only 7–10 of them will actually contain any info about the object; the remaining boxes will contain background, and they will mostly be easy to classify but uninformative. Here the authors found that the loss function (e.g. Cross-Entropy) is the main reason the easy examples distract the training. Below is a pictorial representation.
Even though the wrongly classified samples are penalized more (red arrow in fig. 1) than the correct ones (green arrow), in the dense object detection setting, due to the imbalanced sample size, the loss function is overwhelmed with background (easy samples). Focal Loss addresses this problem: it is designed so that it reduces the loss (‘down-weights’) for the easy examples, and thus the network can focus on training the hard examples. Below is the definition of Focal Loss —
FL(p) = −(1 − p)^γ log(p)
In focal loss, there’s a modulating factor, (1 − p)^γ, multiplied onto the Cross-Entropy loss. When a sample is misclassified, p (which represents the model’s estimated probability for the class with label y = 1) is low, the modulating factor is near 1, and the loss is unaffected. As p→1, the modulating factor approaches 0 and the loss for well-classified examples is down-weighted. The effect of the γ parameter is shown in the plot below —
To quote from the paper —
The modulating factor reduces the loss contribution from easy examples and extends the range in which an example receives low loss.
To understand this, we will compare Cross-Entropy (CE) loss and Focal Loss using the definition above with γ = 2. Consider a true value of 1.0 and three prediction values: 0.90 (close), 0.95 (very close) and 0.20 (far from true). 
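A minimal sketch of this comparison (assuming γ = 2 and no α-balancing, which matches the numbers printed below):

```python
import tensorflow as tf

# Predictions for a true label of 1.0: close, very close, and far from true
y_pred = tf.constant([0.90, 0.95, 0.20])
gamma = 2.0  # focusing parameter, as in the comparison below

# Cross-entropy for a positive (y = 1) example is simply -log(p)
ce = -tf.math.log(y_pred)

# Focal loss down-weights CE by the modulating factor (1 - p)^gamma
fl = tf.pow(1.0 - y_pred, gamma) * ce

print("CE losses:   ", ce.numpy())
print("focal losses:", fl.numpy())
```

Setting gamma to 0 recovers plain cross-entropy, which is an easy sanity check on the sketch.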
Let’s see the loss values below using TensorFlow —
CE loss when pred is close to true: 0.10536041
CE loss when pred is very close to true: 0.051293183
CE loss when pred is far from true: 1.6094373
focal loss when pred is close to true: 0.0010536041110754007
focal loss when pred is very close to true: 0.00012823295779526255
focal loss when pred is far from true: 1.0300399017333985
Here we see that, compared to CE loss, the modulating factor in focal loss plays an important role. When the prediction is close to the truth, the loss is down-weighted far more than when it is far. In particular, when the prediction is 0.90 the focal loss is 0.01 × the CE loss, but when the prediction is 0.95 the focal loss is 0.0025 × the CE loss. Now we get a picture of how focal loss reduces the loss contribution from easy examples and extends the range in which an example receives low loss. This can also be seen from fig. 3. Now we will use a real-world class-imbalanced data-set and see focal loss in action.
Data-set Description: Here I have considered an extremely class-imbalanced data-set available on Kaggle; it contains transactions made by credit cards in September 2013 by European cardholders. Let’s use pandas —
This data-set presents transactions that occurred over two days, and we have 284,807 transactions in total. Features V1, V2, ... V28 are the principal components obtained with PCA (the original features are not provided due to confidentiality issues), and the only features which have not been transformed with PCA are ‘Time’ and ‘Amount’. The feature ‘Time’ contains the seconds elapsed between each transaction and the first transaction in the data-set, and the feature ‘Amount’ is the transaction amount. The feature ‘Class’ is the response variable; it takes the value 1 in case of fraud and 0 otherwise.
Class Imbalance: Let’s plot the distribution of the ‘Class’ feature, which tells us how many transactions are real and how many are fraudulent. As shown in figure 4 above, the overwhelming majority of transactions are real. 
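A minimal sketch of such a class-distribution bar plot, with the data-set’s published class counts (284,315 real vs. 492 fraud transactions) hard-coded so the snippet runs without the Kaggle CSV; the output file name `class_distribution.png` is arbitrary:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Hard-coded class counts of the Kaggle credit-card fraud data-set
# (stand-in for credit_df['Class'].value_counts() on the loaded frame)
counts = {"Real (0)": 284315, "Fraud (1)": 492}

plt.bar(list(counts.keys()), list(counts.values()))
plt.ylabel("Number of transactions")
plt.title("Distribution of the 'Class' feature")
plt.savefig("class_distribution.png")
```

On the actual DataFrame, credit_df['Class'].value_counts().plot(kind='bar') would give the same figure directly.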
Let’s get the numbers with this simple piece of code —
print('real cases:', len(credit_df[credit_df['Class']==0]))
print('fraud cases:', len(credit_df[credit_df['Class']==1]))
>>> real cases: 284315
fraud cases: 492
So the class imbalance ratio is about 1:578; for every 578 real transactions we have one fraud case. First let’s use a simple neural network with cross-entropy loss to predict fraud and real transactions. But before that, a little examination tells us that the ‘Amount’ and ‘Time’ features are not scaled, whereas the other features ‘V1’, ‘V2’, etc. are already scaled. Here we can use StandardScaler/RobustScaler to scale these features, and since RobustScaler is robust to outliers, I chose this standardization technique.
Let’s now choose the features and label as below —
import numpy as np
import tensorflow as tf

X_labels = credit_df.drop(['Class'], axis=1)
y_labels = credit_df['Class']
X_labels = X_labels.to_numpy(dtype=np.float64)
y_labels = y_labels.to_numpy(dtype=np.float64)
y_lab_cat = tf.keras.utils.to_categorical(y_labels, num_classes=2, dtype='float32')
For the train-test split we use stratify to keep the ratio of the labels —
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(X_labels, y_lab_cat, test_size=0.3, stratify=y_lab_cat, shuffle=True)
Now we build a simple neural-net model with 3 dense layers —
from tensorflow.keras.layers import Input, Dense, Activation
from tensorflow.keras.models import Model
from tensorflow.keras import activations
from tensorflow.keras.optimizers import Adam

def simple_model():
    input_data = Input(shape=(x_train.shape[1], ))
    x = Dense(64)(input_data)
    x = Activation(activations.relu)(x)
    x = Dense(32)(x)
    x = Activation(activations.relu)(x)
    x = Dense(2)(x)
    x = Activation(activations.softmax)(x)
    model = Model(inputs=input_data, outputs=x, name='Simple_Model')
    return model

simple_model = simple_model()
Compile the model with categorical cross-entropy as the loss —
simple_model.compile(optimizer=Adam(learning_rate=5e-3), loss='categorical_crossentropy', metrics=['acc'])
Train the model —
simple_model.fit(x_train, y_train, validation_split=0.2, epochs=5, shuffle=True, batch_size=256)
To truly understand the performance of the model, we need to plot the confusion matrix along with the precision, 
recall and F1 scores —

We see from the confusion matrix and the other performance metrics that, as expected, the network classifies the real transactions extremely well, but recall for the fraud class is below 50%. Our goal is to test whether, without changing anything except the loss function, we can get better values for these performance metrics.

First, let’s define the focal loss with alpha and gamma as hyper-parameters. To do this I have used the tfa module, an add-on library for TensorFlow maintained by SIG-addons. Among its additional losses there is an implementation of Focal Loss, which we import as below —

import tensorflow_addons as tfa

fl = tfa.losses.SigmoidFocalCrossEntropy(alpha, gamma)

Using this, let’s define a custom loss function that can serve as a proxy for Focal Loss in this specific two-class problem —

def focal_loss_custom(alpha, gamma):
    def binary_focal_loss(y_true, y_pred):
        fl = tfa.losses.SigmoidFocalCrossEntropy(alpha=alpha, gamma=gamma)
        focal_loss = fl(y_true, y_pred)
        return focal_loss
    return binary_focal_loss

We now just repeat the steps above for model definition, compilation and fitting, but this time using focal loss —

simple_model.compile(optimizer=Adam(learning_rate=5e-3), loss=focal_loss_custom(alpha=0.2, gamma=2.0), metrics=['acc'])

For the alpha and gamma parameters I have simply used the values suggested in the paper (even though the problem there is different); other values should be tested.

simple_model.fit(x_train, y_train, validation_split=0.2, epochs=5, shuffle=True, batch_size=256)

Using Focal Loss we see an improvement, as below —

With Focal Loss the performance metrics improved considerably, and we detected more fraud transactions correctly (101/148) compared to the previous case (69/148).

In this post we discussed Focal Loss and how it can improve a classification task when the data is highly imbalanced.
To demonstrate Focal Loss in action we used the credit-card transaction data-set, which is highly biased towards real transactions, and showed how Focal Loss improves the classification performance.

I would also like to mention that in my research with gamma-ray data we are trying to classify Active Galactic Nuclei (AGN) from Pulsars (PSR), and the gamma-ray sky is mostly populated by AGNs. The picture below is an example of such a simulated sky. This is also an example of a class-imbalanced data-set in computer vision.

[1] Focal Loss Original Paper
[2] Focal Loss Original Presentation
[3] Notebook Used in this Post: GitHub
Here the result is 18.\nmatOrder(array, n)\nInput − List of matrices, the number of matrices in the list.\nOutput − Minimum number of matrix multiplication.\nBegin\n define table minMul of size n x n, initially fill with all 0s\n for length := 2 to n, do\n fir i:=1 to n-length, do\n j := i + length – 1\n minMul[i, j] := ∞\n for k := i to j-1, do\n q := minMul[i, k] + minMul[k+1, j] + array[i-1]*array[k]*array[j]\n if q < minMul[i, j], then minMul[i, j] := q\n done\n done\n done\n return minMul[1, n-1]\nEnd\n#include\nusing namespace std;\n\nint matOrder(int array[], int n) {\n int minMul[n][n]; //holds the number of scalar multiplication needed\n\n for (int i=1; i\nusing namespace std;\n\nint matOrder(int array[], int n) {\n int minMul[n][n]; //holds the number of scalar multiplication needed\n\n for (int i=1; i\\nusing namespace std;\\n\\nint matOrder(int array[], int n) {\\n int minMul[n][n]; //holds the number of scalar multiplication needed\\n\\n for (int i=1; i