Will Google still index a page if I use a $_SESSION variable?


Tags: seo, php, googlebot, session

Asked Apr 4 at 20:30 by Mitchell Lewis

For a couple of pages on our site, I'm writing a widget that relies on what it displayed on other pages to determine what it displays on the current page. Basically, the purpose is just to ensure there's no duplicate content. It's not individualized per user; it just depends on what's being displayed elsewhere at the given time.



My CTO will not allow me to save the data in the database or even in a log file to keep a persistent state, so the only other way I can think of to accomplish this is to set a $_SESSION variable and store the persistent state there. However, I'm realizing Google's bot probably doesn't use cookies, so I'm not sure this will work.



Does anyone know if Google will still index the pages if what they display relies on a session variable? If not, is there another way to store a persistent state across pages, without a database or log file, that Googlebot will understand?
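
For illustration, here is a minimal sketch of the session-based approach being described (the content pool and session key names are made up; as the answers below point out, Googlebot does not carry cookies between requests, so the session part never persists for the crawler):

<?php
// Minimal sketch of the $_SESSION approach from the question. The content
// pool and key names are made up for illustration.
session_start();

$contentPool = ['block-a', 'block-b', 'block-c'];

// Blocks already shown on other pages during this session.
$shown = $_SESSION['shown_content'] ?? [];

// Pick the first block not yet displayed elsewhere, to avoid duplicates.
$available = array_values(array_diff($contentPool, $shown));
$current   = $available[0] ?? $contentPool[0];

// Persist the choice for the next page view. This only works if the
// client returns the session cookie; browsers do, Googlebot does not.
$_SESSION['shown_content'] = array_merge($shown, [$current]);

echo $current;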










  • As to "is there another way to store a persisting state without a db" - take a leaf out of ASP.NET WebForms' book and look at "ViewState": it basically shoves all the state into an encrypted string in a form and treats all page navigation as POST. Messy, but works. – Moo, Apr 5 at 0:51

  • @Moo, does Googlebot respect this? And how would I do that in PHP? All I can find is ASP.NET implementations. – Mitchell Lewis, Apr 5 at 2:03
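
To make Moo's suggestion concrete in PHP, here is a rough sketch of a ViewState-like round trip. PHP has no built-in equivalent, so the key, field name, and helper functions below are all hypothetical, and this version signs the state rather than encrypting it (weaker than real ViewState, but enough to stop tampering):

<?php
// Rough PHP analogue of ASP.NET WebForms' ViewState, per Moo's comment:
// carry the persisting state in a signed hidden form field instead of a
// database, log file, or session. All names here are illustrative.
const VIEWSTATE_KEY = 'replace-with-a-long-random-secret';

function encode_viewstate(array $state): string {
    $payload = base64_encode(json_encode($state));
    return $payload . '.' . hash_hmac('sha256', $payload, VIEWSTATE_KEY);
}

function decode_viewstate(string $blob): array {
    $parts = explode('.', $blob, 2);
    if (count($parts) !== 2) {
        return [];
    }
    [$payload, $sig] = $parts;
    // Reject anything whose signature does not verify (tampered state).
    if (!hash_equals(hash_hmac('sha256', $payload, VIEWSTATE_KEY), $sig)) {
        return [];
    }
    return json_decode(base64_decode($payload), true) ?: [];
}

// On each page: read the state back in, update it, and re-emit it.
$state = decode_viewstate($_POST['viewstate'] ?? '');
$state['seen'][] = 'content-id-shown-on-this-page';

echo '<form method="post" action="/next-page">'
   . '<input type="hidden" name="viewstate" value="'
   . htmlspecialchars(encode_viewstate($state))
   . '"><button>Next page</button></form>';

The caveat for this question: Googlebot does not generally submit POST forms, so state carried this way reaches human visitors but not the crawler.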


















3 Answers

Answer 1 (score 3):

Back in September 2018, John Mueller from Google tweeted:



@jakebohall tweets: "@JohnMu Any chance you'd share a few examples/ideas of when/why Googlebot uses cookies? I haven't come across anything recent on this other than WRS clearing them across page loads..." @JohnMu replies: "We almost never use them. If you need cookie support to view a page, it's almost certain that we wouldn't be able to index it. If you notice it in an audit, fix it (remove the dependency), don't assume that maybe it'll work :-)."



Also see:




Source: https://www.seroundtable.com/google-cookies-seo-26344.html



Google's John Mueller said on Twitter that Google almost certainly cannot index a page that requires cookies. He said if you want Google to index the page, make sure to "remove the dependency" on cookies.








– answered Apr 4 at 21:30 by Simon Hayter, edited Apr 5 at 5:18 by AuxTaco

  • So is there another way to store a persistent state, so that when Googlebot crawls one page I can ensure there won't be any duplicate content on the next page it crawls? – Mitchell Lewis, Apr 5 at 18:28



















Answer 2 (score 1):

What do you do when a user visits the site for the first time? Presumably you calculate what needs to be displayed, display it, and "cache" something in the session (as you mention). Every Googlebot visit is like a user's first-time visit (as @Simon mentions, Googlebot does not use cookies, so no session data can persist).



So, assuming you do display this content to the user on their first visit, Googlebot will also see it, except that it will need to be calculated (which could be slow?) on every request.







– answered Apr 4 at 22:39 by MrWhite

  • Yes, but does this mean it treats every page as a first-time user? The calculation cost is what it is, but if a user sees one page and then another, the second page needs to not have duplicate content, so I need to know what was on the first page to ensure that. So according to John Mu a session variable wouldn't work, but would Googlebot just be unable to index the page at all? I want to leave the code in so at least I can say I wrote the checks; whether Googlebot actually cares is fine by me as long as it can still index the page. Will storing a variable in the session block indexing? – Mitchell Lewis, Apr 5 at 2:01











  • Yes, every page Googlebot visits is seen as a first-time user (assuming your session storage is based on a cookie). You could potentially track the (Google)bot based on some hash (IP + User-Agent, perhaps), but you say you're unable to store additional information in a server-side database/log. It may not be particularly accurate anyway: Googlebot doesn't necessarily "crawl" in a consistent manner like a user; it builds a list of URLs to visit, and these could be visited in any order. – MrWhite, yesterday











  • If Googlebot crawls (and indexes) different content to what a user sees when visiting the same URL, then that is also a problem (and could be considered cloaking). Indexing is based on the URL: one URL = one page of content. – MrWhite, yesterday


















Answer 3 (score 1):

Use a hash of the UTC date and the IP address, then use the hash as the seed for a random number generator, and use that generator to produce a permutation assigning content to pages.



Results: random page content that varies over time while being static per user, unique content per page, no state stored anywhere (not in cookies, not in session URL parameters, not in the db, etc.), compatible with all search engines.



Downside: you need a static list of all the URLs in order to ensure that each URL gets a unique piece of content. Every page request has to map content to each URL in the same random pattern (based on the seed hashed from the date, etc.), which requires processing time linear in the number of URLs.
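
A minimal sketch of this scheme in PHP (the URL list and content pool are placeholders; note that a seeded shuffle() is only deterministic on PHP 7.1+, where it runs on the Mt19937 engine seeded by mt_srand()):

<?php
// Minimal sketch of the stateless scheme above. The URL list and content
// pool are placeholders; the point is that every request derives the same
// URL-to-content permutation from the seed, so no state is stored anywhere.
$urls    = ['/page-a', '/page-b', '/page-c'];       // static list of all URLs
$content = ['content-1', 'content-2', 'content-3']; // one unique block per URL

// Seed from the UTC date plus the visitor's IP: stable for one user on one
// day, different across users and days.
$seed = crc32(gmdate('Y-m-d') . ($_SERVER['REMOTE_ADDR'] ?? ''));
mt_srand($seed);
shuffle($content); // deterministic once seeded (PHP 7.1+)

// The same inputs always produce the same assignment, and because it is a
// permutation, no two URLs ever share a content block.
$map = array_combine($urls, $content);
echo $map[$_SERVER['REQUEST_URI'] ?? ''] ?? $content[0];

Because the mapping is a bijection over the full URL list, uniqueness holds on every request, including Googlebot's, with no cookie, session, or database involved.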







– answered Apr 5 at 17:12 by Brenda.ZMPOV, edited 7 hours ago

  • That's exactly how it works now, with just the date. But even if I hash their IP as well, that still wouldn't help with the problem: after a user goes to another page, what it displays there depends on what they saw on the original page, to ensure no duplicate content. – Mitchell Lewis, Apr 5 at 18:26












  • I should add that there are content pools the randoms are pulling from; these could have overlapping content and aren't in a guaranteed distribution. So even with different arrays of randoms, duplicate content is unlikely but not impossible, and I need guaranteed unique content as a specification for the task. – Mitchell Lewis, Apr 5 at 18:45











  • @MitchellLewis I answered too quickly; the added downside is that you would have to have a static list of all the possible URLs. This is required to ensure a unique mapping for each URL. – Brenda.ZMPOV, 7 hours ago










