Tuesday, 12 October 2021

Nginx: Cannot allocate memory

Recently I faced a weird issue with Nginx: restarting the service suddenly failed with a "Cannot allocate memory" error, as shown in the following image.



When I ran the free -g command, memory was available, and there had been no recent configuration change. Then I started looking into the config files and found this line in one of them:

#Working for odoobiz
proxy_cache_path /var/odoo/bzcache/ levels=1:2 keys_zone=my_cache:6000m max_size=6g inactive=60m use_temp_path=off;

Here the keys_zone was set to 6000m, around 6G of shared memory, while only about 4G was available for the cache. When I changed the settings to the following, it worked!

#Working for odoobiz
proxy_cache_path /var/odoo/bzcache/ levels=1:2 keys_zone=my_cache:2000m max_size=3g inactive=60m use_temp_path=off;

So, if you are facing any such issue, please check your configuration files and see whether somewhere you are allocating more memory than is available.
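A quick way to scan for such allocations is to grep your Nginx configs and compare the numbers against free memory (a minimal check, assuming your configs live under /etc/nginx/ - adjust the path to your setup):

# List every cache-zone allocation in the Nginx configs
grep -rn "keys_zone" /etc/nginx/

# Compare against available memory (in megabytes)
free -m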

I hope this post will help someone else as well!

Thanks! Enjoy Programming!

Tuesday, 28 September 2021

Firefox ServiceWorker Error: When hitting Odoo homepage in Firefox


In this article, I am writing about an error I faced only in the Firefox browser while accessing the home page of a website developed in Odoo.

Issue: Firefox was throwing this error at me:




First I thought it was due to some JavaScript error or something related to Odoo, but after spending some time on it, I was convinced that it's a Firefox bug. Here is the reference link:

https://github.com/mozilla/send/issues/1222

Now the question is, why does this occur and how can we avoid it? Actually, the user settings of the client browser play a big role here. Please make sure that the "Delete cookies and site data when Firefox is closed" option is not selected, as shown in the screenshot.


If you remove this tick, it will work immediately.

But if you want to keep this tick set, you can use "Manage Exceptions": add your website and click on the "Allow" button.



Then it works, even with the "Delete cookies and site data when Firefox is closed" option enabled.

If you tick this option again without the exception, you will hit the problem immediately. I have already tested this multiple times at my end.


I hope this will help others facing the same issue.


Thanks!! Enjoy Programming!

Friday, 17 September 2021

Elasticsearch Explained: Trying to create too many scroll contexts. Must be less than or equal to 500

Hello everyone, today we are going to discuss the following error in Elasticsearch:

"Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting"

Let's try to understand why this occurs and how we can solve it.





When & why does this error trigger?


As the title indicates, this error comes up if you are using the scroll API, especially many scrolls concurrently.

Scrolls are expensive to run concurrently and reserve resources for that period of time.

For each scroll ID, there is a unique point-in-time view of the current set of segments preserved for that scroll. This hangs on to files and related caches that would otherwise be removed by the constant segment rewriting that happens while indexing is active. This is why it is especially resource-intensive to do concurrently.

Let's dive a little deeper.

In order to use scrolling, the initial search request should specify the scroll parameter in the query string, which tells Elasticsearch how long it should keep the “search context” alive. Its value (e.g. 1m) does not need to be long enough to process all data — it just needs to be long enough to process the previous batch of results. Each scroll request (with the scroll parameter) sets a new expiry time. If a scroll request doesn’t pass in the scroll parameter, then the search context will be freed as part of that scroll request.



POST /twitter/_search?scroll=1m
{
    "size": 100,
    "query": {
        "match": {
            "title": "elasticsearch"
        }
    }
}
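The initial response includes a _scroll_id. Each subsequent batch is then fetched with a scroll request, which also renews the timeout (the scroll_id below is a shortened placeholder - use the one returned by your own search):

POST /_search/scroll
{
    "scroll": "1m",
    "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gB..."
}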


Normally, the background merge process optimizes the index by merging together smaller segments to create new bigger segments, at which time the smaller segments are deleted. This process continues during scrolling, but an open search context prevents the old segments from being deleted while they are still in use. This is how Elasticsearch is able to return the results of the initial search request, regardless of subsequent changes to documents.


How to Prevent & Fix it?

Now we know that concurrent scroll requests with long scroll timeouts (e.g. 60m) can use resources extensively and cause this issue.

In case you get this error and are not able to perform any update or delete operations on your cluster, either clear your scrolls or increase max_open_scroll_context for a limited amount of time, until your scrolls are cleared automatically after their timeout. It's not a recommended long-term solution, but to avoid data loss or interrupting ongoing scroll APIs, it can be your savior.
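To see how many scroll contexts are currently open, the per-node search stats expose open_contexts and scroll_current counters (a quick check, assuming Elasticsearch is listening on localhost:9200):

curl -s "http://127.0.0.1:9200/_nodes/stats/indices/search?pretty"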


Clear Scroll API:

Search contexts are automatically removed when the scroll timeout has been exceeded. However, keeping scrolls open has a cost, so a scroll should be cleared explicitly as soon as it is no longer being used, via the clear-scroll API:


DELETE /_search/scroll
{
    "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
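If you simply want to release everything, all search contexts can be cleared at once with the _all endpoint (use with care on a shared cluster, as it also kills scrolls belonging to other clients):

DELETE /_search/scroll/_all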



Increase the size of max_open_scroll_context

To protect against issues caused by having too many scrolls open, you can limit the number of open scrolls per node with the search.max_open_scroll_context cluster setting (the default is 500, which is exactly the limit quoted in the error message).


To check the current and default values, query the cluster settings:

curl -X GET "http://127.0.0.1:9200/_cluster/settings?include_defaults=true&pretty=true"


To update max_open_scroll_context size, you can use the following command.

curl -X PUT http://ip:9200/_cluster/settings -H 'Content-Type: application/json' -d'
{
    "persistent": {
        "search.max_open_scroll_context": 5000
    },
    "transient": {
        "search.max_open_scroll_context": 5000
    }
}'


Note: Don't forget to set it back to a lower number once the outstanding scrolls have expired.


Thanks! Enjoy Programming!!


Reference Links:

https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-request-scroll.html


Friday, 30 July 2021

Elasticsearch: Copy Index Structure


Use Case: Let's say you already have an index on Server1 and you want to create a new index with the exact same structure on Server2. What should you do?

Solution: 

1. Copy the index structure from Server1. Let's say your index name is product-data-index; you can access the settings here: https://x.x.x.x:9200/product-data-index/_settings

{
    "product-data-index": {
        "settings": {
            "index": {
                "routing": {
                    "allocation": {
                        "include": {
                            "_tier_preference": "data_content"
                        }
                    }
                },
                "number_of_shards": "5",
                "provided_name": "product-data-index",
                "max_result_window": "30000",
                "creation_date": "1627383249968",
                "analysis": {
                    "normalizer": {
                        "lowercaseNorm": {
                            "filter": [
                                "lowercase",
                                "asciifolding"
                            ],
                            "type": "custom"
                        }
                    },
                    "analyzer": {
                        "comma_analyzer": {
                            "filter": [
                                "lowercase"
                            ],
                            "pattern": "(,)",
                            "type": "pattern",
                            "tokenizer": "standard"
                        }
                    }
                },
                "number_of_replicas": "1",
                "uuid": "cftuOgIPSKWONmbqfICH0w",
                "version": {
                    "created": "7060299"
                }
            }
        }
    }
}

Copy this settings JSON.

2. Clean the settings JSON.

Now, before creating the index, you have to remove a few fields from the above JSON:

"product-data-index"
"uuid"
"version"
"creation_date"

and it should look like this.

{
    "settings": {
        "index": {
            "routing": {
                "allocation": {
                    "include": {
                        "_tier_preference": "data_content"
                    }
                }
            },
            "number_of_shards": "5",
            "max_result_window": "30000",
            "analysis": {
                "normalizer": {
                    "lowercaseNorm": {
                        "filter": [
                            "lowercase",
                            "asciifolding"
                        ],
                        "type": "custom"
                    }
                },
                "analyzer": {
                    "comma_analyzer": {
                        "filter": [
                            "lowercase"
                        ],
                        "pattern": "(,)",
                        "type": "pattern",
                        "tokenizer": "standard"
                    }
                }
            },
            "number_of_replicas": "1"
        }
    }
}

3. Run this settings JSON as a PUT request on Server2 in Postman, as per the screenshot. You can use any other API client that can create the index, or curl.
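For example, with curl (a sketch assuming Server2 listens on port 9200, you keep the index name product-data-index, and the cleaned JSON is saved as settings.json - adjust host, index name, and file path to your setup):

# Create the index on Server2 with the cleaned settings
curl -X PUT "http://<server2-ip>:9200/product-data-index" \
     -H 'Content-Type: application/json' \
     -d @settings.json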



That's it.


NOTE: To copy the data from Server1 to Server2, you can use the Reindex API.
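A minimal sketch of a remote reindex, run against Server2 (this assumes Server1 is reachable from Server2 and its address has been whitelisted via reindex.remote.whitelist in Server2's elasticsearch.yml):

curl -X POST "http://<server2-ip>:9200/_reindex" -H 'Content-Type: application/json' -d'
{
    "source": {
        "remote": { "host": "http://<server1-ip>:9200" },
        "index": "product-data-index"
    },
    "dest": { "index": "product-data-index" }
}'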


Thanks!!! Enjoy Programming :)

Monday, 12 July 2021

JamfAAD (Intune) on Macs - Sign-in Errors

 Hi,

Today, I faced a weird problem after I changed my O365 password.




This pop-up kept reappearing every few minutes after pressing the Cancel button, and pressing the Continue button threw me to a 404 URL, which was completely irritating and frustrating.

After spending some time looking into this issue, I found a small trick.

1. Open the Safari browser and make it the default one.

2. Next time when this popup comes, press the Continue button.

That's it and the issue is resolved. :)


Thanks!! Enjoy Programming!! :)



Friday, 2 July 2021

Manual Odoo Migration from One Server to Another



Today, we will go through a step-by-step process to move a running Odoo instance from one server to another.

Note: I assume Odoo is already installed and working fine with a demo/test DB.




Before migrating your Odoo application from one server to another, it's important to take all precautionary measures and back up all the data: database, code, and files. Let's start:


I - Backup

Step-1. Database Backup 

You can use the Odoo Database Manager (the /web/database/manager link) to perform this activity. But if the database is too big, please use the pg_dump command from PostgreSQL. Here is an example:

pg_dump -W -F t odoo13 > ~/odoo13_28062021.tar

Note: Make sure that you are logged in as the postgres or odoo user to perform this activity.


Step-2. Backup of custom modules

The next step is to take a backup of all of your custom code. Mostly, it will be in your /odoo/custom/ and /odoo/enterprise/ directories.

You can use the scp command to copy your directories directly from one server to the other. Let's say both of your machines are Linux ones; then these commands can help you (-r copies directories recursively):

scp -r <source> <destination>

scp -r /odoo/custom/ odoo@x.x.x.x:/odoo/


Step-3. Backup of your attachments/images

The last step is to take a backup of your application attachments - order attachments, images attached to tickets, website and product images, etc. You can find the filestore at this location: 

/odoo/.local/share/Odoo/filestore/

You can also copy these files directly to the new server.
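For example, again with scp (assuming the same user and directory layout on the new server):

scp -r /odoo/.local/share/Odoo/filestore/ odoo@x.x.x.x:/odoo/.local/share/Odoo/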


II - Restore

Step-1. Restore Database 

You can use the Odoo Database Manager (the /web/database/manager link) for this too. But if the database is too big, please use the pg_restore command from PostgreSQL. Here is the command:

pg_restore -v --dbname=odoo13 odoo13_28062021.tar

Note: Make sure that you are logged in as the postgres or odoo user to perform this activity and that the database has already been created with the same name.
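If the empty database doesn't exist yet, it can be created first (a sketch assuming your Odoo database role is named odoo - adjust the owner and database name to your setup):

sudo -u postgres createdb -O odoo odoo13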


Step-2. Restoring custom modules

We already copied these files in step 2 of the Backup section. If not done yet, please use the scp command to copy the files to the new server location.

Note: Make sure that odoo is the owner of all these files. You can use this command to change ownership of the directory and subdirectory.

chown -R odoo:odoo /odoo/custom/


Step-3. Restore filestore

We already copied these files in step 3 of the Backup section. If not done yet, please use the scp command to copy the files to the new server location.


III - Reload Odoo

Once all data - database, custom modules, and filestore - is restored on the new server, the next step is to start Odoo with the latest data. I would suggest doing it via the shell/command line if you are a developer.

Step-1. Stop all odoo services

ps ax | grep odoo-bin | grep -v grep | awk '{print $1}' | xargs kill -9

Step-2. Start odoo services.

python3 /odoo/odoo-server/odoo-bin -c /etc/odoo-server.conf -d odoo13 -u all

Once all the modules are updated successfully, you can just stop the Odoo services using the step-1 command and start them with:

sudo service odoo-server start

Step-3. Reload assets - log in as admin and reload assets

I hope everything went well as expected. If your UI is distorted, there is a possibility that the fingerprints generated for the js and css files are not the same as in the filestore. To fix it, we have to reload the assets:

http://x.x.x.x:8069/web?debug=assets



I hope this post will help someone someday and save their day!


Thanks!! Enjoy Programming :)

Thursday, 25 March 2021

Odoo Error: The 'odoo.addons.web' package was not installed in a way that PackageLoader understands.

If you are facing this error in Odoo 12:



It means a compatible version of the Jinja2 package is not installed.
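You can first check which version is currently installed (pip3 may simply be pip, depending on your setup):

pip3 show Jinja2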


Try installing this one:

pip3 install Jinja2==2.10.1

It should solve your problem.


Thanks!!! Enjoy Programming! :)

Monday, 1 February 2021

Elasticsearch Error: Format version is not supported



When I downgraded Elasticsearch from 7.10 to 7.6, I was not able to restart the elasticsearch service and was facing these errors:



and 



After diagnosing the issue and going through a number of web URLs, I came to know that adding the following line to the elasticsearch.yml file would fix my issue:


cluster.initial_master_nodes: ["x.x.x.x"]


OR 


cluster.initial_master_nodes: master-node-a    


I hope it will help someone and save their time too.


Thanks!!! Enjoy Programming!! :)
